Merge remote-tracking branch 'origin/master' into v024
* origin/master: (661 commits)
Bump version for release.
docs: update Changelog; closes #481
issue #481: core: preserve stderr TTY FD if one is present.
issue #481: avoid crash if disconnect occurs during forward_modules()
Add a few more important modules to preamble_size.py.
.ci: add verbiage for run_batches() too.
.ci: add README.md.
docs: update thanks
docs: lose "approaching stability" language, we're pretty good now
docs: fix changelog syntax/order/"20KB"
tests: add new compression parameter to mitogen_get_stack results
tests: disable affinity_test on Travis :/
issue #508: fix responder stats test due to new smaller parent.py.
issue #508: tests: skip minify_test Py2.4/2.5 for profiler.py.
tests: fix fallout from 36fb318adf5c56e729296c3efce84f4dd75ced4e
issue #520: add AIX auth failure string to su.
tests: move affinity_test to Ansible tests.
core: cProfile is not available in 2.4.
issue #505: docs: add new detail graph for one scenario.
docs: update and re-record profile graphs in docs; closes #505
service: fix PushFileService exception
tests: pad out localhost-*
service: start pool shutdown on broker shutdown.
master: .encode() needed for Py3.
ansible: stash PID files in CWD if requested for debugging.
issue #508: master: minify_safe_re must be bytes for Py3.
bench: tidy up and cpu-pin some more files.
tests: add localhost-x100
ansible: double the default pool size.
ansible: raise error with correct exception type.
issue #508: master: minify all Mitogen/ansible_mitogen sources.
parent: PartialZlib docstrings.
ansible: hacky parser to allow bools to be specified on command line
parent: pre-cache bootstrap if possible.
docs: update Changelog.
ansible: add mitogen_ssh_compression variable.
service: PushFileService never recorded a file as sent.
parent: synchronize get_core_source()
service: use correct profile aggregation name.
SyntaxError.
ansible: don't pin controller if <4 cores.
tests: make soak testing work reliably on vanilla.
docs: changelog tidyups.
ansible: document and make affinity stuff portable to non-Linux
ansible: fix affinity.py test failure on 2 cores.
ansible: preheat PluginLoader caches before fork.
tests: make mitogen_shutdown_all be run_once by default.
docs: update Changelog.
ansible: use Poller for WorkerProcess; closes #491.
ansible: new multiplexer/workers configuration
docs: update Changelog.
docs: update Changelog.
ansible: pin connection multiplexer to a single core
utils: pad out reset_affinity() and integrate with detach_popen()
utils: import reset_affinity() function.
master: set Router.profiling if MITOGEN_PROFILING variable present.
parent: don't kill children when profiling is active.
ansible: hook strategy and worker processes into profiler
profiler: import from linear2 branch
core: tidy up existing profiling code and support MITOGEN_PROFILE_FMT
issue #260: redundant if statement.
ansible: ensure MuxProcess MITOGEN_PROFILING results reach disk.
ansible/bench: make end= configurable.
master: cache sent/forwarded module names
Aggregate code coverage data across tox all runs
Allow independent control of coverage erase and reporting
Fix incorrect attempt to use coverage
docs: update Changelog; closes #527.
issue #527: catch new-style module tracebacks like vanilla.
Fix DeprecationWarning in mitogen.utils.run_with_router()
Generate coverage report even if some tests fail
ci: fix incorrect partition/rpartition from 8a4caea84f
issue #260: hide force-disconnect messages.
issue #498: fix shutdown crash
issue #260: avoid start_transmit()/on_transmit()/stop_transmit()
core: ensure broker profiling output reaches disk
master: keep is_stdlib_path() result as negative cache entry
ci: Allow DISTROS="debian*32" variable, and KEEP=1
Use develop mode in tox
issue #429: fix sudo regression.
misc: rename to scripts. tab completion!!
core: Latch._wake improvements
issue #498: prevent crash on double 'disconnect' signal.
issue #413: don't double-propagate DEL_ROUTE to parent.
issue #498: wrap Router dict mutations in a lock
issue #429: enable en_US locale to unbreak debops test.
issue #499: fix another mind-numbingly stupid vanilla inconsistency
issue #497: do our best to cope with crap upstream code
ssh: fix test to match updated log format.
issue #429: update Changelog.
issue #429: update Changelog.
issue #429: teach sudo about every known i18n password string.
issue #429: install i18n-related bits in test images.
ssh: tidy up logs and stream names.
tests: ensure file is closed in connection_test.
gcloud: small updates
tests: give ansible/gcloud/ its own requirements file.
issue #499: another totally moronic implementation difference
issue #499: disable new test on vanilla.
docs: update Changelog; closes #499.
...
pull/862/head
@@ -0,0 +1,44 @@

# `.ci`

This directory contains scripts for Travis CI and (more or less) Azure
Pipelines, but they will also happily run on any Debian-like machine.

The scripts are usually split into `_install` and `_test` steps. The `_install`
step will damage your machine, the `_test` step will just run the tests the way
CI runs them.

There is a common library, `ci_lib.py`, which just centralizes a bunch of
random macros and also environment parsing.

Some of the scripts allow you to pass extra flags through to the component
under test, e.g. `../../.ci/ansible_tests.py -vvv` will run with verbose output.

Hack these scripts to your heart's content. There is no pride to be found
here, just necessity.


### `ci_lib.run_batches()`

There are some weird-looking functions to extract more parallelism from the
build. The above function takes lists of strings, arranging for the strings in
each list to run in order, but for the lists to run in parallel. That's great
for doing `setup.py install` while pulling a Docker container, for example.
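
A minimal usage sketch (the command strings here are hypothetical, chosen only
to illustrate the ordering guarantees):

```python
import ci_lib

# Strings within one list run sequentially; the lists themselves run in
# parallel as independent shell subprocesses.
ci_lib.run_batches([
    [
        'python setup.py install',                # batch 1, runs first
        'pip install -r tests/requirements.txt',  # batch 1, runs second
    ],
    [
        'docker pull mitogen/debian-test',        # batch 2, runs concurrently
    ],
])
```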


### Environment Variables

* `VER`: Ansible version the `_install` script should install. Default changes
  over time.

* `TARGET_COUNT`: number of targets for the `debops_` run. Defaults to 2.

* `DISTRO`: the `mitogen_` tests need a target Docker container distro. This
  name comes from the Docker Hub `mitogen` user, i.e. `mitogen/$DISTRO-test`.

* `DISTROS`: the `ansible_` tests can run against multiple targets
  simultaneously, which speeds things up. This is a space-separated list of
  DISTRO names, but additionally supports:

  * `debian-py3`: when generating the Ansible inventory file, set
    `ansible_python_interpreter` to `python3`, i.e. run a test where the
    target interpreter is Python 3.

  * `debian*16`: generate 16 Docker containers running Debian. Also works
    with `-py3`.
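
For example (a hypothetical invocation), `DISTROS="debian*16 centos6-py3"
.ci/ansible_tests.py` would start sixteen Debian containers plus one CentOS 6
container whose generated inventory entry selects a Python 3 interpreter.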
@@ -0,0 +1,21 @@
#!/usr/bin/env python

import ci_lib

batches = [
    [
        # Must be installed separately, as PyNACL indirect requirement causes
        # newer version to be installed if done in a single pip run.
        'pip install "pycparser<2.19" "idna<2.7"',
        'pip install '
        '-r tests/requirements.txt '
        '-r tests/ansible/requirements.txt',
    ]
]

batches.extend(
    ['docker pull %s' % (ci_lib.image_for_distro(distro),)]
    for distro in ci_lib.DISTROS
)

ci_lib.run_batches(batches)
@@ -0,0 +1,63 @@
#!/usr/bin/env python
# Run tests/ansible/all.yml under Ansible and Ansible-Mitogen

import glob
import os
import sys

import ci_lib
from ci_lib import run


TESTS_DIR = os.path.join(ci_lib.GIT_ROOT, 'tests/ansible')
HOSTS_DIR = os.path.join(ci_lib.TMP, 'hosts')


with ci_lib.Fold('unit_tests'):
    os.environ['SKIP_MITOGEN'] = '1'
    ci_lib.run('./run_tests -v')


with ci_lib.Fold('docker_setup'):
    containers = ci_lib.make_containers()
    ci_lib.start_containers(containers)


with ci_lib.Fold('job_setup'):
    # Don't set -U as that will upgrade Paramiko to a non-2.6 compatible version.
    run("pip install -q ansible==%s", ci_lib.ANSIBLE_VERSION)

    os.chdir(TESTS_DIR)
    # Parse '0600' as octal; a bare 0600 literal is a syntax error on Py3.
    os.chmod('../data/docker/mitogen__has_sudo_pubkey.key', int('0600', 8))

    run("mkdir %s", HOSTS_DIR)
    for path in glob.glob(TESTS_DIR + '/hosts/*'):
        if not path.endswith('default.hosts'):
            run("ln -s %s %s", path, HOSTS_DIR)

    inventory_path = os.path.join(HOSTS_DIR, 'target')
    with open(inventory_path, 'w') as fp:
        fp.write('[test-targets]\n')
        fp.writelines(
            "%(name)s "
            "ansible_host=%(hostname)s "
            "ansible_port=%(port)s "
            "ansible_python_interpreter=%(python_path)s "
            "ansible_user=mitogen__has_sudo_nopw "
            "ansible_password=has_sudo_nopw_password"
            "\n"
            % container
            for container in containers
        )
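
    # Illustration (not part of the original script): with the default
    # DISTROS, a rendered inventory line looks roughly like
    #   target-debian-1 ansible_host=localhost ansible_port=2201
    #   ansible_python_interpreter=/usr/bin/python ...   (one line per target)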

    ci_lib.dump_file(inventory_path)

    if not ci_lib.exists_in_path('sshpass'):
        run("sudo apt-get update")
        run("sudo apt-get install -y sshpass")


with ci_lib.Fold('ansible'):
    playbook = os.environ.get('PLAYBOOK', 'all.yml')
    run('./run_ansible_playbook.py %s -i "%s" %s',
        playbook, HOSTS_DIR, ' '.join(sys.argv[1:]))
@@ -0,0 +1,83 @@
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/python

jobs:

- job: 'MitogenTests'
  pool:
    vmImage: 'Ubuntu 16.04'
  strategy:
    matrix:
      Mitogen27Debian_27:
        python.version: '2.7'
        MODE: mitogen
        DISTRO: debian

      MitogenPy27CentOS6_26:
        python.version: '2.7'
        MODE: mitogen
        DISTRO: centos6

      #Py26CentOS7:
        #python.version: '2.7'
        #MODE: mitogen
        #DISTRO: centos6

      Mitogen36CentOS6_26:
        python.version: '3.6'
        MODE: mitogen
        DISTRO: centos6

      DebOps_2460_27_27:
        python.version: '2.7'
        MODE: debops_common
        VER: 2.4.6.0

      DebOps_262_36_27:
        python.version: '3.6'
        MODE: debops_common
        VER: 2.6.2

      Ansible_2460_26:
        python.version: '2.7'
        MODE: ansible
        VER: 2.4.6.0

      Ansible_262_26:
        python.version: '2.7'
        MODE: ansible
        VER: 2.6.2

      Ansible_2460_36:
        python.version: '3.6'
        MODE: ansible
        VER: 2.4.6.0

      Ansible_262_36:
        python.version: '3.6'
        MODE: ansible
        VER: 2.6.2

      Vanilla_262_27:
        python.version: '2.7'
        MODE: ansible
        VER: 2.6.2
        DISTROS: debian
        STRATEGY: linear

  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(python.version)'
      architecture: 'x64'

  - script: .ci/prep_azure.py
    displayName: "Install requirements."

  - script: .ci/$(MODE)_install.py
    displayName: "Install requirements."

  - script: .ci/$(MODE)_tests.py
    displayName: Run tests.
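
# Each matrix entry's MODE variable selects the script pair above:
# .ci/$(MODE)_install.py followed by .ci/$(MODE)_tests.py.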

@@ -0,0 +1,222 @@

from __future__ import absolute_import
from __future__ import print_function

import atexit
import os
import shlex
import shutil
import subprocess
import sys
import tempfile

try:
    import urlparse
except ImportError:
    import urllib.parse as urlparse

os.chdir(
    os.path.join(
        os.path.dirname(__file__),
        '..'
    )
)


#
# check_output() monkeypatch cutpasted from testlib.py
#

def subprocess__check_output(*popenargs, **kwargs):
    # Missing from 2.6.
    process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
    output, _ = process.communicate()
    retcode = process.poll()
    if retcode:
        cmd = kwargs.get("args")
        if cmd is None:
            cmd = popenargs[0]
        raise subprocess.CalledProcessError(retcode, cmd)
    return output

if not hasattr(subprocess, 'check_output'):
    subprocess.check_output = subprocess__check_output


# -----------------

# Force stdout FD 1 to be a pipe, so tools like pip don't spam progress bars.

if 'TRAVIS_HOME' in os.environ:
    proc = subprocess.Popen(
        args=['stdbuf', '-oL', 'cat'],
        stdin=subprocess.PIPE
    )

    os.dup2(proc.stdin.fileno(), 1)
    os.dup2(proc.stdin.fileno(), 2)

    def cleanup_travis_junk(stdout=sys.stdout, stderr=sys.stderr, proc=proc):
        stdout.close()
        stderr.close()
        proc.terminate()

    atexit.register(cleanup_travis_junk)

# -----------------

def _argv(s, *args):
    if args:
        s %= args
    return shlex.split(s)


def run(s, *args, **kwargs):
    argv = ['/usr/bin/time', '--'] + _argv(s, *args)
    print('Running: %s' % (argv,))
    ret = subprocess.check_call(argv, **kwargs)
    print('Finished running: %s' % (argv,))
    return ret


def run_batches(batches):
    combine = lambda batch: 'set -x; ' + (' && '.join(
        '( %s; )' % (cmd,)
        for cmd in batch
    ))

    procs = [
        subprocess.Popen(combine(batch), shell=True)
        for batch in batches
    ]
    assert [proc.wait() for proc in procs] == [0] * len(procs)
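
# For illustration (not in the original file): given [['a', 'b'], ['c']],
# combine() renders each batch as a single shell command, so the two
# subprocesses execute:
#   set -x; ( a; ) && ( b; )
#   set -x; ( c; )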


def get_output(s, *args, **kwargs):
    argv = _argv(s, *args)
    print('Running: %s' % (argv,))
    return subprocess.check_output(argv, **kwargs)


def exists_in_path(progname):
    return any(os.path.exists(os.path.join(dirname, progname))
               for dirname in os.environ['PATH'].split(os.pathsep))


class TempDir(object):
    def __init__(self):
        self.path = tempfile.mkdtemp(prefix='mitogen_ci_lib')
        atexit.register(self.destroy)

    def destroy(self, rmtree=shutil.rmtree):
        rmtree(self.path)


class Fold(object):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        print('travis_fold:start:%s' % (self.name,))

    def __exit__(self, _1, _2, _3):
        print('')
        print('travis_fold:end:%s' % (self.name,))


os.environ.setdefault('ANSIBLE_STRATEGY',
                      os.environ.get('STRATEGY', 'mitogen_linear'))
ANSIBLE_VERSION = os.environ.get('VER', '2.6.2')
GIT_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
DISTRO = os.environ.get('DISTRO', 'debian')
DISTROS = os.environ.get('DISTROS', 'debian centos6 centos7').split()
TARGET_COUNT = int(os.environ.get('TARGET_COUNT', '2'))
BASE_PORT = 2200
TMP = TempDir().path

os.environ['PYTHONDONTWRITEBYTECODE'] = 'x'
os.environ['PYTHONPATH'] = '%s:%s' % (
    os.environ.get('PYTHONPATH', ''),
    GIT_ROOT
)

def get_docker_hostname():
    url = os.environ.get('DOCKER_HOST')
    if url in (None, 'http+docker://localunixsocket'):
        return 'localhost'

    parsed = urlparse.urlparse(url)
    return parsed.netloc.partition(':')[0]
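
# For example (illustrative values only): DOCKER_HOST="tcp://192.168.99.100:2376"
# yields "192.168.99.100", while an unset DOCKER_HOST yields "localhost".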


def image_for_distro(distro):
    return 'mitogen/%s-test' % (distro.partition('-')[0],)


def make_containers():
    docker_hostname = get_docker_hostname()
    firstbit = lambda s: (s+'-').split('-')[0]
    secondbit = lambda s: (s+'-').split('-')[1]

    i = 1
    lst = []

    for distro in DISTROS:
        distro, star, count = distro.partition('*')
        if star:
            count = int(count)
        else:
            count = 1

        for x in range(count):
            lst.append({
                "distro": firstbit(distro),
                "name": "target-%s-%s" % (distro, i),
                "hostname": docker_hostname,
                "port": BASE_PORT + i,
                "python_path": (
                    '/usr/bin/python3'
                    if secondbit(distro) == 'py3'
                    else '/usr/bin/python'
                )
            })
            i += 1

    return lst
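
# For illustration (not in the original file): DISTROS="debian*2 centos6-py3"
# expands to three entries along the lines of:
#   {'distro': 'debian',  'name': 'target-debian-1',      'port': 2201,
#    'python_path': '/usr/bin/python'}
#   {'distro': 'debian',  'name': 'target-debian-2',      'port': 2202,
#    'python_path': '/usr/bin/python'}
#   {'distro': 'centos6', 'name': 'target-centos6-py3-3', 'port': 2203,
#    'python_path': '/usr/bin/python3'}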


def start_containers(containers):
    if os.environ.get('KEEP'):
        return

    run_batches([
        [
            "docker rm -f %(name)s || true" % container,
            "docker run "
                "--rm "
                "--detach "
                "--publish 0.0.0.0:%(port)s:22/tcp "
                "--hostname=%(name)s "
                "--name=%(name)s "
                "mitogen/%(distro)s-test "
            % container
        ]
        for container in containers
    ])
    return containers


def dump_file(path):
    print()
    print('--- %s ---' % (path,))
    print()
    with open(path, 'r') as fp:
        print(fp.read().rstrip())
    print('---')
    print()


# SSH passes these through to the container when run interactively, causing
# stdout to get messed up with libc warnings.
os.environ.pop('LANG', None)
os.environ.pop('LC_ALL', None)
@@ -0,0 +1,18 @@
#!/usr/bin/env python

import ci_lib

# Naturally DebOps only supports Debian.
ci_lib.DISTROS = ['debian']

ci_lib.run_batches([
    [
        # Must be installed separately, as PyNACL indirect requirement causes
        # newer version to be installed if done in a single pip run.
        'pip install "pycparser<2.19"',
        'pip install -qqqU debops==0.7.2 ansible==%s' % ci_lib.ANSIBLE_VERSION,
    ],
    [
        'docker pull %s' % (ci_lib.image_for_distro('debian'),),
    ],
])
@@ -0,0 +1,79 @@
#!/usr/bin/env python

from __future__ import print_function
import os

import ci_lib


# DebOps only supports Debian.
ci_lib.DISTROS = ['debian'] * ci_lib.TARGET_COUNT

project_dir = os.path.join(ci_lib.TMP, 'project')
key_file = os.path.join(
    ci_lib.GIT_ROOT,
    'tests/data/docker/mitogen__has_sudo_pubkey.key',
)
vars_path = 'ansible/inventory/group_vars/debops_all_hosts.yml'
inventory_path = 'ansible/inventory/hosts'
docker_hostname = ci_lib.get_docker_hostname()


with ci_lib.Fold('docker_setup'):
    containers = ci_lib.make_containers()
    ci_lib.start_containers(containers)


with ci_lib.Fold('job_setup'):
    ci_lib.run('debops-init %s', project_dir)
    os.chdir(project_dir)

    with open('.debops.cfg', 'w') as fp:
        fp.write(
            "[ansible defaults]\n"
            "strategy_plugins = %s/ansible_mitogen/plugins/strategy\n"
            "strategy = mitogen_linear\n"
            % (ci_lib.GIT_ROOT,)
        )

    ci_lib.run('chmod go= %s', key_file)
    with open(vars_path, 'w') as fp:
        fp.write(
            "ansible_python_interpreter: /usr/bin/python2.7\n"
            "\n"
            "ansible_user: mitogen__has_sudo_pubkey\n"
            "ansible_become_pass: has_sudo_pubkey_password\n"
            "ansible_ssh_private_key_file: %s\n"
            "\n"
            # Speed up slow DH generation.
            "dhparam__bits: ['128', '64']\n"
            % (key_file,)
        )

    with open(inventory_path, 'a') as fp:
        fp.writelines(
            '%(name)s '
            'ansible_host=%(hostname)s '
            'ansible_port=%(port)d '
            'ansible_python_interpreter=%(python_path)s '
            '\n'
            % container
            for container in containers
        )

    print()
    print('--- ansible/inventory/hosts: ---')
    ci_lib.run('cat ansible/inventory/hosts')
    print('---')
    print()

    # Now we have real host key checking, we need to turn it off
    os.environ['ANSIBLE_HOST_KEY_CHECKING'] = 'False'


with ci_lib.Fold('first_run'):
    ci_lib.run('debops common')


with ci_lib.Fold('second_run'):
    ci_lib.run('debops common')
@@ -0,0 +1,15 @@
#!/usr/bin/env python

import ci_lib

batches = [
    [
        'pip install "pycparser<2.19" "idna<2.7"',
        'pip install -r tests/requirements.txt',
    ],
    [
        'docker pull %s' % (ci_lib.image_for_distro(ci_lib.DISTRO),),
    ]
]

ci_lib.run_batches(batches)
@@ -0,0 +1,14 @@
#!/usr/bin/env python

import ci_lib

batches = [
    [
        'docker pull %s' % (ci_lib.image_for_distro(ci_lib.DISTRO),),
    ],
    [
        'sudo tar -C / -jxvf tests/data/ubuntu-python-2.4.6.tar.bz2',
    ]
]

ci_lib.run_batches(batches)
@@ -0,0 +1,17 @@
#!/usr/bin/env python
# Mitogen tests for Python 2.4.

import os

import ci_lib

os.environ.update({
    'NOCOVERAGE': '1',
    'UNIT2': '/usr/local/python2.4.6/bin/unit2',

    'MITOGEN_TEST_DISTRO': ci_lib.DISTRO,
    'MITOGEN_LOG_LEVEL': 'debug',
    'SKIP_ANSIBLE': '1',
})

ci_lib.run('./run_tests -v')
@@ -0,0 +1,14 @@
#!/usr/bin/env python
# Run the Mitogen tests.

import os

import ci_lib

os.environ.update({
    'MITOGEN_TEST_DISTRO': ci_lib.DISTRO,
    'MITOGEN_LOG_LEVEL': 'debug',
    'SKIP_ANSIBLE': '1',
})

ci_lib.run('./run_tests -v')
@@ -0,0 +1,22 @@
#!/usr/bin/env python

import ci_lib

batches = []
batches.append([
    'echo force-unsafe-io | sudo tee /etc/dpkg/dpkg.cfg.d/nosync',
    'sudo add-apt-repository ppa:deadsnakes/ppa',
    'sudo apt-get update',
    'sudo apt-get -y install python2.6 python2.6-dev libsasl2-dev libldap2-dev',
])

batches.append([
    'pip install -r dev_requirements.txt',
])

batches.extend(
    ['docker pull %s' % (ci_lib.image_for_distro(distro),)]
    for distro in ci_lib.DISTROS
)

ci_lib.run_batches(batches)
@@ -1,15 +1,21 @@
 
+Please drag-drop large logs as text file attachments.
+
 Feel free to write an issue in your preferred format, however if in doubt, use
 the following checklist as a guide for what to include.
 
 * Have you tried the latest master version from Git?
+* Do you have some idea of what the underlying problem may be?
+  https://mitogen.rtfd.io/en/stable/ansible.html#common-problems has
+  instructions to help figure out the likely cause and how to gather relevant
+  logs.
 * Mention your host and target OS and versions
 * Mention your host and target Python versions
 * If reporting a performance issue, mention the number of targets and a rough
   description of your workload (lots of copies, lots of tiny file edits, etc.)
-* If reporting a crash or hang in Ansible, please rerun with -vvvv and include
-  the last 200 lines of output, along with a full copy of any traceback or
-  error text in the log. Beware "-vvvv" may include secret data! Edit as
-  necessary before posting.
+* If reporting a crash or hang in Ansible, please rerun with -vvv and include
+  200 lines of output around the point of the error, along with a full copy of
+  any traceback or error text in the log. Beware "-vvv" may include secret
+  data! Edit as necessary before posting.
 * If reporting any kind of problem with Ansible, please include the Ansible
   version along with output of "ansible-config dump --only-changed".
@@ -0,0 +1,16 @@

Thanks for creating a PR! Here's a quick checklist to pay attention to:

* Please add an entry to docs/changelog.rst as appropriate.

* Has some new parameter been added or semantics modified somehow? Please
  ensure relevant documentation is updated in docs/ansible.rst and
  docs/api.rst.

* If it's for new functionality, is there at least a basic test in either
  tests/ or tests/ansible/ covering it?

* If it's for a new connection method, please try to stub out the
  implementation as in tests/data/stubs/, so that construction can be tested
  without having a working configuration.
@@ -1,66 +0,0 @@
#!/usr/bin/env python
# Run tests/ansible/all.yml under Ansible and Ansible-Mitogen

import os
import sys

import ci_lib
from ci_lib import run


BASE_PORT = 2201
TESTS_DIR = os.path.join(ci_lib.GIT_ROOT, 'tests/ansible')
HOSTS_DIR = os.path.join(ci_lib.TMP, 'hosts')


with ci_lib.Fold('docker_setup'):
    for i, distro in enumerate(ci_lib.DISTROS):
        try:
            run("docker rm -f target-%s", distro)
        except: pass

        run("""
            docker run
            --rm
            --detach
            --publish 0.0.0.0:%s:22/tcp
            --hostname=target-%s
            --name=target-%s
            mitogen/%s-test
        """, BASE_PORT + i, distro, distro, distro)


with ci_lib.Fold('job_setup'):
    os.chdir(TESTS_DIR)
    os.chmod('../data/docker/mitogen__has_sudo_pubkey.key', int('0600', 7))

    # Don't set -U as that will upgrade Paramiko to a non-2.6 compatible version.
    run("pip install -q ansible==%s", ci_lib.ANSIBLE_VERSION)

    run("mkdir %s", HOSTS_DIR)
    run("ln -s %s/hosts/common-hosts %s", TESTS_DIR, HOSTS_DIR)

    with open(os.path.join(HOSTS_DIR, 'target'), 'w') as fp:
        fp.write('[test-targets]\n')
        for i, distro in enumerate(ci_lib.DISTROS):
            fp.write("target-%s "
                     "ansible_host=%s "
                     "ansible_port=%s "
                     "ansible_user=mitogen__has_sudo_nopw "
                     "ansible_password=has_sudo_nopw_password"
                     "\n" % (
                distro,
                ci_lib.DOCKER_HOSTNAME,
                BASE_PORT + i,
            ))

    # Build the binaries.
    # run("make -C %s", TESTS_DIR)
    if not ci_lib.exists_in_path('sshpass'):
        run("sudo apt-get update")
        run("sudo apt-get install -y sshpass")


with ci_lib.Fold('ansible'):
    run('/usr/bin/time ./run_ansible_playbook.sh all.yml -i "%s" %s',
        HOSTS_DIR, ' '.join(sys.argv[1:]))
@@ -1,102 +0,0 @@

from __future__ import absolute_import
from __future__ import print_function

import atexit
import os
import subprocess
import sys
import shlex
import shutil
import tempfile

import os
os.system('curl -H Metadata-Flavor:Google http://metadata.google.internal/computeMetadata/v1/instance/machine-type')

#
# check_output() monkeypatch cutpasted from testlib.py
#

def subprocess__check_output(*popenargs, **kwargs):
    # Missing from 2.6.
    process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
    output, _ = process.communicate()
    retcode = process.poll()
    if retcode:
        cmd = kwargs.get("args")
        if cmd is None:
            cmd = popenargs[0]
        raise subprocess.CalledProcessError(retcode, cmd)
    return output

if not hasattr(subprocess, 'check_output'):
    subprocess.check_output = subprocess__check_output


# -----------------

def _argv(s, *args):
    if args:
        s %= args
    return shlex.split(s)


def run(s, *args, **kwargs):
    argv = _argv(s, *args)
    print('Running: %s' % (argv,))
    return subprocess.check_call(argv, **kwargs)


def get_output(s, *args, **kwargs):
    argv = _argv(s, *args)
    print('Running: %s' % (argv,))
    return subprocess.check_output(argv, **kwargs)


def exists_in_path(progname):
    return any(os.path.exists(os.path.join(dirname, progname))
               for dirname in os.environ['PATH'].split(os.pathsep))


class TempDir(object):
    def __init__(self):
        self.path = tempfile.mkdtemp(prefix='mitogen_ci_lib')
        atexit.register(self.destroy)

    def destroy(self, rmtree=shutil.rmtree):
        rmtree(self.path)


class Fold(object):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        print('travis_fold:start:%s' % (self.name))

    def __exit__(self, _1, _2, _3):
        print('')
        print('travis_fold:end:%s' % (self.name))


os.environ.setdefault('ANSIBLE_STRATEGY',
                      os.environ.get('STRATEGY', 'mitogen_linear'))
ANSIBLE_VERSION = os.environ.get('VER', '2.6.2')
GIT_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
DISTROS = os.environ.get('DISTROS', 'debian centos6 centos7').split()
TMP = TempDir().path

os.environ['PYTHONDONTWRITEBYTECODE'] = 'x'
os.environ['PYTHONPATH'] = '%s:%s' % (
    os.environ.get('PYTHONPATH', ''),
    GIT_ROOT
)

DOCKER_HOSTNAME = subprocess.check_output([
    sys.executable,
    os.path.join(GIT_ROOT, 'tests/show_docker_hostname.py'),
]).decode().strip()

# SSH passes these through to the container when run interactively, causing
# stdout to get messed up with libc warnings.
os.environ.pop('LANG', None)
os.environ.pop('LC_ALL', None)
@@ -1,90 +0,0 @@
#!/bin/bash -ex
# Run some invocations of DebOps.

TMPDIR="/tmp/debops-$$"
TRAVIS_BUILD_DIR="${TRAVIS_BUILD_DIR:-`pwd`}"
TARGET_COUNT="${TARGET_COUNT:-2}"
ANSIBLE_VERSION="${VER:-2.6.1}"
DISTRO=debian # Naturally DebOps only supports Debian.

export PYTHONPATH="${PYTHONPATH}:${TRAVIS_BUILD_DIR}"

function on_exit()
{
    echo travis_fold:start:cleanup
    [ "$KEEP" ] || {
        rm -rf "$TMPDIR" || true
        for i in $(seq $TARGET_COUNT)
        do
            docker kill target$i || true
        done
    }
    echo travis_fold:end:cleanup
}

trap on_exit EXIT
mkdir "$TMPDIR"


echo travis_fold:start:job_setup
pip install -qqqU debops==0.7.2 ansible==${ANSIBLE_VERSION}
debops-init "$TMPDIR/project"
cd "$TMPDIR/project"

cat > .debops.cfg <<-EOF
[ansible defaults]
strategy_plugins = ${TRAVIS_BUILD_DIR}/ansible_mitogen/plugins/strategy
strategy = mitogen_linear
EOF

chmod go= ${TRAVIS_BUILD_DIR}/tests/data/docker/mitogen__has_sudo_pubkey.key

cat > ansible/inventory/group_vars/debops_all_hosts.yml <<-EOF
ansible_python_interpreter: /usr/bin/python2.7

ansible_user: mitogen__has_sudo_pubkey
ansible_become_pass: has_sudo_pubkey_password
ansible_ssh_private_key_file: ${TRAVIS_BUILD_DIR}/tests/data/docker/mitogen__has_sudo_pubkey.key

# Speed up slow DH generation.
dhparam__bits: ["128", "64"]
EOF

DOCKER_HOSTNAME="$(python ${TRAVIS_BUILD_DIR}/tests/show_docker_hostname.py)"

for i in $(seq $TARGET_COUNT)
do
    port=$((2200 + $i))
    docker run \
        --rm \
        --detach \
        --publish 0.0.0.0:$port:22/tcp \
        --name=target$i \
        mitogen/${DISTRO}-test

    echo \
        target$i \
        ansible_host=$DOCKER_HOSTNAME \
        ansible_port=$port \
        >> ansible/inventory/hosts
done

echo
echo --- ansible/inventory/hosts: ----
cat ansible/inventory/hosts
echo ---

# Now we have real host key checking, we need to turn it off. :)
export ANSIBLE_HOST_KEY_CHECKING=False

echo travis_fold:end:job_setup


echo travis_fold:start:first_run
/usr/bin/time debops common "$@"
echo travis_fold:end:first_run


echo travis_fold:start:second_run
/usr/bin/time debops common "$@"
echo travis_fold:end:second_run
@@ -1,5 +0,0 @@
#!/bin/bash -ex
# Run the Mitogen tests.

MITOGEN_TEST_DISTRO="${DISTRO:-debian}"
MITOGEN_LOG_LEVEL=debug PYTHONPATH=. ${TRAVIS_BUILD_DIR}/run_tests -vvv
@@ -1,4 +1,13 @@
 
 # Mitogen
 <!-- [![Build Status](https://travis-ci.org/dw/mitogen.png?branch=master)](https://travis-ci.org/dw/mitogen}) -->
 <a href="https://mitogen.readthedocs.io/">Please see the documentation</a>.
+
+![](https://i.imgur.com/eBM6LhJ.gif)
+
+[![Total alerts](https://img.shields.io/lgtm/alerts/g/dw/mitogen.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/dw/mitogen/alerts/)
+
+[![Build Status](https://travis-ci.org/dw/mitogen.svg?branch=master)](https://travis-ci.org/dw/mitogen)
+
+[![Pipelines Status](https://dev.azure.com/dw-mitogen/Mitogen/_apis/build/status/dw.mitogen?branchName=master)](https://dev.azure.com/dw-mitogen/Mitogen/_build/latest?definitionId=1?branchName=master)
+
@@ -0,0 +1,241 @@
# Copyright 2017, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

"""
As Mitogen separates asynchronous IO out to a broker thread, communication
necessarily involves context switching and waking that thread. When application
threads and the broker share a CPU, this can be almost invisibly fast - around
25 microseconds for a full A->B->A round-trip.

However when threads are scheduled on different CPUs, round-trip delays
regularly vary wildly, and easily run into milliseconds. Many contributing
factors exist, not least scenarios like:

1. A is preempted immediately after waking B, but before releasing the GIL.
2. B wakes from IO wait only to immediately enter futex wait.
3. A may wait 10ms or more for another timeslice, as the scheduler on its CPU
   runs threads unrelated to its transaction (i.e. not B), waking only to
   release its GIL, before entering IO sleep waiting for a reply from B, which
   cannot exist yet.
4. B wakes, acquires GIL, performs work, and sends reply to A, causing it to
   wake. B is preempted before releasing GIL.
5. A wakes from IO wait only to immediately enter futex wait.
6. B may wait 10ms or more for another timeslice, waking only to release its
   GIL, before sleeping again.
7. A wakes, acquires GIL, finally receives reply.

Per the above, if we are unlucky, on an even moderately busy machine it is
possible to lose milliseconds just in scheduling delay, and the effect is
compounded when pairs of threads in process A are communicating with pairs of
threads in process B using the same scheme, such as when Ansible WorkerProcess
is communicating with ContextService in the connection multiplexer. In the
worst case it could involve 4 threads working in lockstep spread across 4 busy
CPUs.

Since multithreading in Python is essentially useless except for waiting on IO
due to the presence of the GIL, at least in Ansible there is no good reason for
threads in the same process to run on distinct CPUs - they always operate in
lockstep due to the GIL, and are thus vulnerable to issues like the above.

Linux lacks any natural API to describe what we want; it only permits
individual threads to be constrained to run on specific CPUs, and for that
constraint to be inherited by new threads and forks of the constrained thread.

This module therefore implements a CPU pinning policy for Ansible processes,
providing methods that should be called early in any new process, either to
rebalance which CPU it is pinned to, or in the case of subprocesses, to remove
the pinning entirely. It is likely to require ongoing tweaking, since pinning
necessarily involves preventing the scheduler from making load balancing
decisions.
"""

import ctypes
import mmap
import multiprocessing
import os
import struct

import mitogen.parent


try:
    _libc = ctypes.CDLL(None, use_errno=True)
    _strerror = _libc.strerror
    _strerror.restype = ctypes.c_char_p
    _pthread_mutex_init = _libc.pthread_mutex_init
    _pthread_mutex_lock = _libc.pthread_mutex_lock
    _pthread_mutex_unlock = _libc.pthread_mutex_unlock
    _sched_setaffinity = _libc.sched_setaffinity
except (OSError, AttributeError):
    _libc = None
    _strerror = None
    _pthread_mutex_init = None
    _pthread_mutex_lock = None
    _pthread_mutex_unlock = None
    _sched_setaffinity = None


class pthread_mutex_t(ctypes.Structure):
    """
    Wrap pthread_mutex_t to allow storing a lock in shared memory.
    """
    _fields_ = [
        ('data', ctypes.c_uint8 * 512),
    ]

    def init(self):
        if _pthread_mutex_init(self.data, 0):
            raise Exception(_strerror(ctypes.get_errno()))

    def acquire(self):
        if _pthread_mutex_lock(self.data):
            raise Exception(_strerror(ctypes.get_errno()))

    def release(self):
        if _pthread_mutex_unlock(self.data):
            raise Exception(_strerror(ctypes.get_errno()))


class State(ctypes.Structure):
    """
    Contents of shared memory segment. This allows :meth:`Manager.assign` to be
    called from any child, since affinity assignment must happen from within
    the context of the new child process.
    """
    _fields_ = [
        ('lock', pthread_mutex_t),
        ('counter', ctypes.c_uint8),
    ]


class Policy(object):
    """
    Process affinity policy.
    """
    def assign_controller(self):
        """
        Assign the Ansible top-level policy to this process.
        """

    def assign_muxprocess(self):
        """
        Assign the MuxProcess policy to this process.
        """

    def assign_worker(self):
        """
        Assign the WorkerProcess policy to this process.
        """

    def assign_subprocess(self):
        """
        Assign the helper subprocess policy to this process.
        """


class LinuxPolicy(Policy):
    """
    :class:`Policy` for Linux machines. The scheme here was tested on an
    otherwise idle 16 thread machine.

    - The connection multiplexer is pinned to CPU 0.
    - The Ansible top-level (strategy) is pinned to CPU 1.
    - WorkerProcesses are pinned sequentially to 2..N, wrapping around when no
      more CPUs exist.
    - Children such as SSH may be scheduled on any CPU except 0/1.

    If the machine has fewer than 4 cores available, the top-level and workers
    are pinned between CPU 2..N, i.e. no CPU is reserved for the top-level
    process.

    This could at least be improved by having workers pinned to independent
    cores, before reusing the second hyperthread of an existing core.

    A hook is installed that causes :meth:`reset` to run in the child of any
    process created with :func:`mitogen.parent.detach_popen`, ensuring
    CPU-intensive children like SSH are not forced to share the same core as
    the (otherwise potentially very busy) parent.
    """
    def __init__(self):
        self.mem = mmap.mmap(-1, 4096)
        self.state = State.from_buffer(self.mem)
        self.state.lock.init()
        if self._cpu_count() < 4:
            self._reserve_mask = 3
            self._reserve_shift = 2
            self._reserve_controller = True
        else:
            self._reserve_mask = 1
            self._reserve_shift = 1
            self._reserve_controller = False

    def _set_affinity(self, mask):
        mitogen.parent._preexec_hook = self._clear
        s = struct.pack('L', mask)
        _sched_setaffinity(os.getpid(), len(s), s)

    def _cpu_count(self):
        return multiprocessing.cpu_count()

    def _balance(self):
        self.state.lock.acquire()
        try:
            n = self.state.counter
            self.state.counter += 1
        finally:
            self.state.lock.release()

        self._set_cpu(self._reserve_shift + (
            (n % max(1, (self._cpu_count() - self._reserve_shift)))
        ))

    def _set_cpu(self, cpu):
        self._set_affinity(1 << cpu)

    def _clear(self):
        self._set_affinity(0xffffffff & ~self._reserve_mask)
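
    # Illustration (not part of the original module): with >= 4 CPUs,
    # _reserve_mask == 1, so _clear() allows every CPU except 0; with < 4
    # CPUs, _reserve_mask == 3 and CPUs 0 and 1 are both excluded.
    # _set_cpu(n) conversely permits exactly one CPU, e.g. _set_cpu(2)
    # yields mask 0b100.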

    def assign_controller(self):
        if self._reserve_controller:
            self._set_cpu(1)
        else:
            self._balance()

    def assign_muxprocess(self):
        self._set_cpu(0)

    def assign_worker(self):
        self._balance()

    def assign_subprocess(self):
        self._clear()


if _sched_setaffinity is not None:
    policy = LinuxPolicy()
else:
    policy = Policy()
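
# Illustrative call pattern (hypothetical import path; the real call sites
# live in ansible_mitogen and are not part of this diff):
#   import ansible_mitogen.affinity
#   ansible_mitogen.affinity.policy.assign_muxprocess()  # multiplexer startup
#   ansible_mitogen.affinity.policy.assign_worker()      # each WorkerProcess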
|
@ -0,0 +1,318 @@
|
|||||||
|
r"""JSON (JavaScript Object Notation) <http://json.org> is a subset of
|
||||||
|
JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data
|
||||||
|
interchange format.
|
||||||
|
|
||||||
|
:mod:`simplejson` exposes an API familiar to users of the standard library
|
||||||
|
:mod:`marshal` and :mod:`pickle` modules. It is the externally maintained
|
||||||
|
version of the :mod:`json` library contained in Python 2.6, but maintains
|
||||||
|
compatibility with Python 2.4 and Python 2.5 and (currently) has
|
||||||
|
significant performance advantages, even without using the optional C
|
||||||
|
extension for speedups.
|
||||||
|
|
||||||
|
Encoding basic Python object hierarchies::
|
||||||
|
|
||||||
|
>>> import simplejson as json
|
||||||
|
>>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
|
||||||
|
'["foo", {"bar": ["baz", null, 1.0, 2]}]'
|
||||||
|
>>> print json.dumps("\"foo\bar")
|
||||||
|
"\"foo\bar"
|
||||||
|
>>> print json.dumps(u'\u1234')
|
||||||
|
"\u1234"
|
||||||
|
>>> print json.dumps('\\')
|
||||||
|
"\\"
|
||||||
|
>>> print json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True)
|
||||||
|
{"a": 0, "b": 0, "c": 0}
|
||||||
|
>>> from StringIO import StringIO
|
||||||
|
>>> io = StringIO()
|
||||||
|
>>> json.dump(['streaming API'], io)
|
||||||
|
>>> io.getvalue()
|
||||||
|
'["streaming API"]'
|
||||||
|
|
||||||
|
Compact encoding::
|
||||||
|
|
||||||
|
>>> import simplejson as json
|
||||||
|
>>> json.dumps([1,2,3,{'4': 5, '6': 7}], separators=(',',':'))
|
||||||
|
'[1,2,3,{"4":5,"6":7}]'
|
||||||
|
|
||||||
|
Pretty printing::
|
||||||
|
|
||||||
|
>>> import simplejson as json
|
||||||
|
>>> s = json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=4)
|
||||||
|
>>> print '\n'.join([l.rstrip() for l in s.splitlines()])
|
||||||
|
{
|
||||||
|
"4": 5,
|
||||||
|
"6": 7
|
||||||
|
}
|
||||||
|
|
||||||
|
Decoding JSON::
|
||||||
|
|
||||||
|
>>> import simplejson as json
|
||||||
|
>>> obj = [u'foo', {u'bar': [u'baz', None, 1.0, 2]}]
|
||||||
|
>>> json.loads('["foo", {"bar":["baz", null, 1.0, 2]}]') == obj
|
||||||
|
True
|
||||||
|
>>> json.loads('"\\"foo\\bar"') == u'"foo\x08ar'
|
||||||
|
True
|
||||||
|
>>> from StringIO import StringIO
|
||||||
|
>>> io = StringIO('["streaming API"]')
|
||||||
|
>>> json.load(io)[0] == 'streaming API'
|
||||||
|
True
|
||||||
|
|
||||||
|
Specializing JSON object decoding::
|
||||||
|
|
||||||
|
>>> import simplejson as json
|
||||||
|
>>> def as_complex(dct):
|
||||||
|
... if '__complex__' in dct:
|
||||||
|
... return complex(dct['real'], dct['imag'])
|
||||||
|
... return dct
|
||||||
|
...
|
||||||
|
>>> json.loads('{"__complex__": true, "real": 1, "imag": 2}',
|
||||||
|
... object_hook=as_complex)
|
||||||
|
(1+2j)
|
||||||
|
>>> import decimal
|
||||||
|
>>> json.loads('1.1', parse_float=decimal.Decimal) == decimal.Decimal('1.1')
|
||||||
|
True
|
||||||
|
|
||||||
|
Specializing JSON object encoding::
|
||||||
|
|
||||||
|
>>> import simplejson as json
|
||||||
|
>>> def encode_complex(obj):
|
||||||
|
... if isinstance(obj, complex):
|
||||||
|
... return [obj.real, obj.imag]
|
||||||
|
... raise TypeError(repr(o) + " is not JSON serializable")
|
||||||
|
...
|
||||||
|
>>> json.dumps(2 + 1j, default=encode_complex)
|
||||||
|
'[2.0, 1.0]'
|
||||||
|
>>> json.JSONEncoder(default=encode_complex).encode(2 + 1j)
|
||||||
|
'[2.0, 1.0]'
|
||||||
|
>>> ''.join(json.JSONEncoder(default=encode_complex).iterencode(2 + 1j))
|
||||||
|
'[2.0, 1.0]'
|
||||||
|
|
||||||
|
|
||||||
|
Using simplejson.tool from the shell to validate and pretty-print::
|
||||||
|
|
||||||
|
$ echo '{"json":"obj"}' | python -m simplejson.tool
|
||||||
|
{
|
||||||
|
"json": "obj"
|
||||||
|
}
|
||||||
|
$ echo '{ 1.2:3.4}' | python -m simplejson.tool
|
||||||
|
Expecting property name: line 1 column 2 (char 2)
|
||||||
|
"""
|
||||||
|
__version__ = '2.0.9'
|
||||||
|
__all__ = [
|
||||||
|
'dump', 'dumps', 'load', 'loads',
|
||||||
|
'JSONDecoder', 'JSONEncoder',
|
||||||
|
]
|
||||||
|
|
||||||
|
__author__ = 'Bob Ippolito <bob@redivi.com>'
|
||||||
|
|
||||||
|
from decoder import JSONDecoder
|
||||||
|
from encoder import JSONEncoder
|
||||||
|
|
||||||
|
_default_encoder = JSONEncoder(
|
||||||
|
skipkeys=False,
|
||||||
|
ensure_ascii=True,
|
||||||
|
check_circular=True,
|
||||||
|
allow_nan=True,
|
||||||
|
indent=None,
|
||||||
|
separators=None,
|
||||||
|
encoding='utf-8',
|
||||||
|
default=None,
|
||||||
|
)
|
||||||
|
|
||||||
|
def dump(obj, fp, skipkeys=False, ensure_ascii=True, check_circular=True,
|
||||||
|
allow_nan=True, cls=None, indent=None, separators=None,
|
||||||
|
encoding='utf-8', default=None, **kw):
|
||||||
|
"""Serialize ``obj`` as a JSON formatted stream to ``fp`` (a
|
||||||
|
``.write()``-supporting file-like object).
|
||||||
|
|
||||||
|
If ``skipkeys`` is true then ``dict`` keys that are not basic types
|
||||||
|
(``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
|
||||||
|
will be skipped instead of raising a ``TypeError``.
|
||||||
|
|
||||||
|
If ``ensure_ascii`` is false, then the some chunks written to ``fp``
|
||||||
|
may be ``unicode`` instances, subject to normal Python ``str`` to
|
||||||
|
``unicode`` coercion rules. Unless ``fp.write()`` explicitly
|
||||||
|
understands ``unicode`` (as in ``codecs.getwriter()``) this is likely
|
||||||
|
to cause an error.
|
||||||
|
|
||||||
|
If ``check_circular`` is false, then the circular reference check
|
||||||
|
for container types will be skipped and a circular reference will
|
||||||
|
result in an ``OverflowError`` (or worse).
|
||||||
|
|
||||||
|
If ``allow_nan`` is false, then it will be a ``ValueError`` to
|
||||||
|
serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``)
|
||||||
|
in strict compliance of the JSON specification, instead of using the
|
||||||
|
JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
|
||||||
|
|
||||||
|
If ``indent`` is a non-negative integer, then JSON array elements and object
|
||||||
|
members will be pretty-printed with that indent level. An indent level
|
||||||
|
of 0 will only insert newlines. ``None`` is the most compact representation.
|
||||||
|
|
||||||
|
If ``separators`` is an ``(item_separator, dict_separator)`` tuple
|
||||||
|
then it will be used instead of the default ``(', ', ': ')`` separators.
|
||||||
|
``(',', ':')`` is the most compact JSON representation.
|
||||||
|
|
||||||
|
``encoding`` is the character encoding for str instances, default is UTF-8.
|
||||||
|
|
||||||
|
``default(obj)`` is a function that should return a serializable version
|
||||||
|
of obj or raise TypeError. The default simply raises TypeError.
|
||||||
|
|
||||||
|
To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
|
||||||
|
``.default()`` method to serialize additional types), specify it with
|
||||||
|
the ``cls`` kwarg.
|
||||||
|
|
||||||
|
"""
|
||||||
|
# cached encoder
|
||||||
|
if (not skipkeys and ensure_ascii and
|
||||||
|
check_circular and allow_nan and
|
||||||
|
cls is None and indent is None and separators is None and
|
||||||
|
encoding == 'utf-8' and default is None and not kw):
|
||||||
|
iterable = _default_encoder.iterencode(obj)
|
||||||
|
else:
|
||||||
|
if cls is None:
|
||||||
|
cls = JSONEncoder
|
||||||
|
iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
|
||||||
|
check_circular=check_circular, allow_nan=allow_nan, indent=indent,
|
||||||
|
separators=separators, encoding=encoding,
|
||||||
|
default=default, **kw).iterencode(obj)
|
||||||
|
# could accelerate with writelines in some versions of Python, at
|
||||||
|
# a debuggability cost
|
||||||
|
for chunk in iterable:
|
||||||
|
fp.write(chunk)


def dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,
        allow_nan=True, cls=None, indent=None, separators=None,
        encoding='utf-8', default=None, **kw):
    """Serialize ``obj`` to a JSON formatted ``str``.

    If ``skipkeys`` is true then ``dict`` keys that are not basic types
    (``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
    will be skipped instead of raising a ``TypeError``.

    If ``ensure_ascii`` is false, then the return value will be a
    ``unicode`` instance subject to normal Python ``str`` to ``unicode``
    coercion rules instead of being escaped to an ASCII ``str``.

    If ``check_circular`` is false, then the circular reference check
    for container types will be skipped and a circular reference will
    result in an ``OverflowError`` (or worse).

    If ``allow_nan`` is false, then it will be a ``ValueError`` to
    serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``) in
    strict compliance with the JSON specification, instead of using the
    JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).

    If ``indent`` is a non-negative integer, then JSON array elements and
    object members will be pretty-printed with that indent level. An indent
    level of 0 will only insert newlines. ``None`` is the most compact
    representation.

    If ``separators`` is an ``(item_separator, dict_separator)`` tuple
    then it will be used instead of the default ``(', ', ': ')`` separators.
    ``(',', ':')`` is the most compact JSON representation.

    ``encoding`` is the character encoding for str instances, default is
    UTF-8.

    ``default(obj)`` is a function that should return a serializable version
    of obj or raise TypeError. The default simply raises TypeError.

    To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
    ``.default()`` method to serialize additional types), specify it with
    the ``cls`` kwarg.

    """
    # cached encoder
    if (not skipkeys and ensure_ascii and
        check_circular and allow_nan and
        cls is None and indent is None and separators is None and
        encoding == 'utf-8' and default is None and not kw):
        return _default_encoder.encode(obj)
    if cls is None:
        cls = JSONEncoder
    return cls(
        skipkeys=skipkeys, ensure_ascii=ensure_ascii,
        check_circular=check_circular, allow_nan=allow_nan, indent=indent,
        separators=separators, encoding=encoding, default=default,
        **kw).encode(obj)
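
# Illustrative usage of dumps() (not part of the original module): the
# ``separators`` tuple yields the most compact encoding, and extra keyword
# arguments such as sort_keys pass through to JSONEncoder.
#
#     >>> dumps([1, 2, 3, {'4': 5, '6': 7}], sort_keys=True, separators=(',', ':'))
#     '[1,2,3,{"4":5,"6":7}]'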


_default_decoder = JSONDecoder(encoding=None, object_hook=None)


def load(fp, encoding=None, cls=None, object_hook=None, parse_float=None,
        parse_int=None, parse_constant=None, **kw):
    """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
    a JSON document) to a Python object.

    If the contents of ``fp`` is encoded with an ASCII based encoding other
    than utf-8 (e.g. latin-1), then an appropriate ``encoding`` name must
    be specified. Encodings that are not ASCII based (such as UCS-2) are
    not allowed, and should be wrapped with
    ``codecs.getreader(fp)(encoding)``, or simply decoded to a ``unicode``
    object and passed to ``loads()``.

    ``object_hook`` is an optional function that will be called with the
    result of any object literal decode (a ``dict``). The return value of
    ``object_hook`` will be used instead of the ``dict``. This feature
    can be used to implement custom decoders (e.g. JSON-RPC class hinting).

    To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
    kwarg.

    """
    return loads(fp.read(),
        encoding=encoding, cls=cls, object_hook=object_hook,
        parse_float=parse_float, parse_int=parse_int,
        parse_constant=parse_constant, **kw)


def loads(s, encoding=None, cls=None, object_hook=None, parse_float=None,
        parse_int=None, parse_constant=None, **kw):
    """Deserialize ``s`` (a ``str`` or ``unicode`` instance containing a JSON
    document) to a Python object.

    If ``s`` is a ``str`` instance and is encoded with an ASCII based encoding
    other than utf-8 (e.g. latin-1) then an appropriate ``encoding`` name
    must be specified. Encodings that are not ASCII based (such as UCS-2)
    are not allowed and should be decoded to ``unicode`` first.

    ``object_hook`` is an optional function that will be called with the
    result of any object literal decode (a ``dict``). The return value of
    ``object_hook`` will be used instead of the ``dict``. This feature
    can be used to implement custom decoders (e.g. JSON-RPC class hinting).

    ``parse_float``, if specified, will be called with the string
    of every JSON float to be decoded. By default this is equivalent to
    float(num_str). This can be used to use another datatype or parser
    for JSON floats (e.g. decimal.Decimal).

    ``parse_int``, if specified, will be called with the string
    of every JSON int to be decoded. By default this is equivalent to
    int(num_str). This can be used to use another datatype or parser
    for JSON integers (e.g. float).

    ``parse_constant``, if specified, will be called with one of the
    following strings: -Infinity, Infinity, NaN.
    This can be used to raise an exception if invalid JSON numbers
    are encountered.

    To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
    kwarg.

    """
    if (cls is None and encoding is None and object_hook is None and
            parse_int is None and parse_float is None and
            parse_constant is None and not kw):
        return _default_decoder.decode(s)
    if cls is None:
        cls = JSONDecoder
    if object_hook is not None:
        kw['object_hook'] = object_hook
    if parse_float is not None:
        kw['parse_float'] = parse_float
    if parse_int is not None:
        kw['parse_int'] = parse_int
    if parse_constant is not None:
        kw['parse_constant'] = parse_constant
    return cls(encoding=encoding, **kw).decode(s)
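
# Illustrative usage of loads() (not part of the original module):
# ``parse_float`` substitutes an exact decimal type for ``float``.
#
#     >>> import decimal
#     >>> loads('1.1', parse_float=decimal.Decimal)
#     Decimal('1.1')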
@@ -0,0 +1,354 @@
"""Implementation of JSONDecoder
"""
import re
import sys
import struct

from simplejson.scanner import make_scanner
try:
    from simplejson._speedups import scanstring as c_scanstring
except ImportError:
    c_scanstring = None

__all__ = ['JSONDecoder']

FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL

def _floatconstants():
    _BYTES = '7FF80000000000007FF0000000000000'.decode('hex')
    if sys.byteorder != 'big':
        _BYTES = _BYTES[:8][::-1] + _BYTES[8:][::-1]
    nan, inf = struct.unpack('dd', _BYTES)
    return nan, inf, -inf

NaN, PosInf, NegInf = _floatconstants()


def linecol(doc, pos):
    lineno = doc.count('\n', 0, pos) + 1
    if lineno == 1:
        colno = pos
    else:
        colno = pos - doc.rindex('\n', 0, pos)
    return lineno, colno


def errmsg(msg, doc, pos, end=None):
    # Note that this function is called from _speedups
    lineno, colno = linecol(doc, pos)
    if end is None:
        #fmt = '{0}: line {1} column {2} (char {3})'
        #return fmt.format(msg, lineno, colno, pos)
        fmt = '%s: line %d column %d (char %d)'
        return fmt % (msg, lineno, colno, pos)
    endlineno, endcolno = linecol(doc, end)
    #fmt = '{0}: line {1} column {2} - line {3} column {4} (char {5} - {6})'
    #return fmt.format(msg, lineno, colno, endlineno, endcolno, pos, end)
    fmt = '%s: line %d column %d - line %d column %d (char %d - %d)'
    return fmt % (msg, lineno, colno, endlineno, endcolno, pos, end)


_CONSTANTS = {
    '-Infinity': NegInf,
    'Infinity': PosInf,
    'NaN': NaN,
}

STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
BACKSLASH = {
    '"': u'"', '\\': u'\\', '/': u'/',
    'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
}

DEFAULT_ENCODING = "utf-8"

def py_scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match):
    """Scan the string s for a JSON string. End is the index of the
    character in s after the quote that started the JSON string.
    Unescapes all valid JSON string escape sequences and raises ValueError
    on attempt to decode an invalid string. If strict is False then literal
    control characters are allowed in the string.

    Returns a tuple of the decoded string and the index of the character in s
    after the end quote."""
    if encoding is None:
        encoding = DEFAULT_ENCODING
    chunks = []
    _append = chunks.append
    begin = end - 1
    while 1:
        chunk = _m(s, end)
        if chunk is None:
            raise ValueError(
                errmsg("Unterminated string starting at", s, begin))
        end = chunk.end()
        content, terminator = chunk.groups()
        # Content contains zero or more unescaped string characters
        if content:
            if not isinstance(content, unicode):
                content = unicode(content, encoding)
            _append(content)
        # Terminator is the end of string, a literal control character,
        # or a backslash denoting that an escape sequence follows
        if terminator == '"':
            break
        elif terminator != '\\':
            if strict:
                msg = "Invalid control character %r at" % (terminator,)
                #msg = "Invalid control character {0!r} at".format(terminator)
                raise ValueError(errmsg(msg, s, end))
            else:
                _append(terminator)
                continue
        try:
            esc = s[end]
        except IndexError:
            raise ValueError(
                errmsg("Unterminated string starting at", s, begin))
        # If not a unicode escape sequence, must be in the lookup table
        if esc != 'u':
            try:
                char = _b[esc]
            except KeyError:
                msg = "Invalid \\escape: " + repr(esc)
                raise ValueError(errmsg(msg, s, end))
            end += 1
        else:
            # Unicode escape sequence
            esc = s[end + 1:end + 5]
            next_end = end + 5
            if len(esc) != 4:
                msg = "Invalid \\uXXXX escape"
                raise ValueError(errmsg(msg, s, end))
            uni = int(esc, 16)
            # Check for surrogate pair on UCS-4 systems
            if 0xd800 <= uni <= 0xdbff and sys.maxunicode > 65535:
                msg = "Invalid \\uXXXX\\uXXXX surrogate pair"
                if not s[end + 5:end + 7] == '\\u':
                    raise ValueError(errmsg(msg, s, end))
                esc2 = s[end + 7:end + 11]
                if len(esc2) != 4:
                    raise ValueError(errmsg(msg, s, end))
                uni2 = int(esc2, 16)
                uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00))
                next_end += 6
            char = unichr(uni)
            end = next_end
        # Append the unescaped character
        _append(char)
    return u''.join(chunks), end
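
# Illustrative behaviour of py_scanstring() (not part of the original
# module): scanning starts just after the opening quote, and the returned
# index points past the closing quote.
#
#     >>> py_scanstring('"foo\\nbar"', 1)
#     (u'foo\nbar', 10)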


# Use speedup if available
scanstring = c_scanstring or py_scanstring

WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS)
WHITESPACE_STR = ' \t\n\r'

def JSONObject((s, end), encoding, strict, scan_once, object_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
    pairs = {}
    # Use a slice to prevent IndexError from being raised, the following
    # check will raise a more specific ValueError if the string is empty
    nextchar = s[end:end + 1]
    # Normally we expect nextchar == '"'
    if nextchar != '"':
        if nextchar in _ws:
            end = _w(s, end).end()
            nextchar = s[end:end + 1]
        # Trivial empty object
        if nextchar == '}':
            return pairs, end + 1
        elif nextchar != '"':
            raise ValueError(errmsg("Expecting property name", s, end))
    end += 1
    while True:
        key, end = scanstring(s, end, encoding, strict)

        # To skip some function call overhead we optimize the fast paths where
        # the JSON key separator is ": " or just ":".
        if s[end:end + 1] != ':':
            end = _w(s, end).end()
            if s[end:end + 1] != ':':
                raise ValueError(errmsg("Expecting : delimiter", s, end))

        end += 1

        try:
            if s[end] in _ws:
                end += 1
                if s[end] in _ws:
                    end = _w(s, end + 1).end()
        except IndexError:
            pass

        try:
            value, end = scan_once(s, end)
        except StopIteration:
            raise ValueError(errmsg("Expecting object", s, end))
        pairs[key] = value

        try:
            nextchar = s[end]
            if nextchar in _ws:
                end = _w(s, end + 1).end()
                nextchar = s[end]
        except IndexError:
            nextchar = ''
        end += 1

        if nextchar == '}':
            break
        elif nextchar != ',':
            raise ValueError(errmsg("Expecting , delimiter", s, end - 1))

        try:
            nextchar = s[end]
            if nextchar in _ws:
                end += 1
                nextchar = s[end]
                if nextchar in _ws:
                    end = _w(s, end + 1).end()
                    nextchar = s[end]
        except IndexError:
            nextchar = ''

        end += 1
        if nextchar != '"':
            raise ValueError(errmsg("Expecting property name", s, end - 1))

    if object_hook is not None:
        pairs = object_hook(pairs)
    return pairs, end

def JSONArray((s, end), scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
    values = []
    nextchar = s[end:end + 1]
    if nextchar in _ws:
        end = _w(s, end + 1).end()
        nextchar = s[end:end + 1]
    # Look-ahead for trivial empty array
    if nextchar == ']':
        return values, end + 1
    _append = values.append
    while True:
        try:
            value, end = scan_once(s, end)
        except StopIteration:
            raise ValueError(errmsg("Expecting object", s, end))
        _append(value)
        nextchar = s[end:end + 1]
        if nextchar in _ws:
            end = _w(s, end + 1).end()
            nextchar = s[end:end + 1]
        end += 1
        if nextchar == ']':
            break
        elif nextchar != ',':
            raise ValueError(errmsg("Expecting , delimiter", s, end))

        try:
            if s[end] in _ws:
                end += 1
                if s[end] in _ws:
                    end = _w(s, end + 1).end()
        except IndexError:
            pass

    return values, end

class JSONDecoder(object):
    """Simple JSON <http://json.org> decoder

    Performs the following translations in decoding by default:

    +---------------+-------------------+
    | JSON          | Python            |
    +===============+===================+
    | object        | dict              |
    +---------------+-------------------+
    | array         | list              |
    +---------------+-------------------+
    | string        | unicode           |
    +---------------+-------------------+
    | number (int)  | int, long         |
    +---------------+-------------------+
    | number (real) | float             |
    +---------------+-------------------+
    | true          | True              |
    +---------------+-------------------+
    | false         | False             |
    +---------------+-------------------+
    | null          | None              |
    +---------------+-------------------+

    It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as
    their corresponding ``float`` values, which is outside the JSON spec.

    """

    def __init__(self, encoding=None, object_hook=None, parse_float=None,
            parse_int=None, parse_constant=None, strict=True):
        """``encoding`` determines the encoding used to interpret any ``str``
        objects decoded by this instance (utf-8 by default). It has no
        effect when decoding ``unicode`` objects.

        Note that currently only encodings that are a superset of ASCII work;
        strings of other encodings should be passed in as ``unicode``.

        ``object_hook``, if specified, will be called with the result
        of every JSON object decoded and its return value will be used in
        place of the given ``dict``. This can be used to provide custom
        deserializations (e.g. to support JSON-RPC class hinting).

        ``parse_float``, if specified, will be called with the string
        of every JSON float to be decoded. By default this is equivalent to
        float(num_str). This can be used to use another datatype or parser
        for JSON floats (e.g. decimal.Decimal).

        ``parse_int``, if specified, will be called with the string
        of every JSON int to be decoded. By default this is equivalent to
        int(num_str). This can be used to use another datatype or parser
        for JSON integers (e.g. float).

        ``parse_constant``, if specified, will be called with one of the
        following strings: -Infinity, Infinity, NaN.
        This can be used to raise an exception if invalid JSON numbers
        are encountered.

        """
        self.encoding = encoding
        self.object_hook = object_hook
        self.parse_float = parse_float or float
        self.parse_int = parse_int or int
        self.parse_constant = parse_constant or _CONSTANTS.__getitem__
        self.strict = strict
        self.parse_object = JSONObject
        self.parse_array = JSONArray
        self.parse_string = scanstring
        self.scan_once = make_scanner(self)

    def decode(self, s, _w=WHITESPACE.match):
        """Return the Python representation of ``s`` (a ``str`` or ``unicode``
        instance containing a JSON document)

        """
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
        end = _w(s, end).end()
        if end != len(s):
            raise ValueError(errmsg("Extra data", s, end, len(s)))
        return obj

    def raw_decode(self, s, idx=0):
        """Decode a JSON document from ``s`` (a ``str`` or ``unicode`` beginning
        with a JSON document) and return a 2-tuple of the Python
        representation and the index in ``s`` where the document ended.

        This can be used to decode a JSON document from a string that may
        have extraneous data at the end.

        """
        try:
            obj, end = self.scan_once(s, idx)
        except StopIteration:
            raise ValueError("No JSON object could be decoded")
        return obj, end
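
# Illustrative usage of raw_decode() (not part of the original module):
# decode a document that has trailing data after it.
#
#     >>> JSONDecoder().raw_decode('{"id": 1} trailing garbage')
#     ({u'id': 1}, 9)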
@@ -0,0 +1,440 @@
"""Implementation of JSONEncoder
"""
import re

try:
    from simplejson._speedups import encode_basestring_ascii as c_encode_basestring_ascii
except ImportError:
    c_encode_basestring_ascii = None
try:
    from simplejson._speedups import make_encoder as c_make_encoder
except ImportError:
    c_make_encoder = None

ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]')
ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])')
HAS_UTF8 = re.compile(r'[\x80-\xff]')
ESCAPE_DCT = {
    '\\': '\\\\',
    '"': '\\"',
    '\b': '\\b',
    '\f': '\\f',
    '\n': '\\n',
    '\r': '\\r',
    '\t': '\\t',
}
for i in range(0x20):
    #ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i))
    ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,))

# Assume this produces an infinity on all machines (probably not guaranteed)
INFINITY = float('1e66666')
FLOAT_REPR = repr

def encode_basestring(s):
    """Return a JSON representation of a Python string

    """
    def replace(match):
        return ESCAPE_DCT[match.group(0)]
    return '"' + ESCAPE.sub(replace, s) + '"'


def py_encode_basestring_ascii(s):
    """Return an ASCII-only JSON representation of a Python string

    """
    if isinstance(s, str) and HAS_UTF8.search(s) is not None:
        s = s.decode('utf-8')
    def replace(match):
        s = match.group(0)
        try:
            return ESCAPE_DCT[s]
        except KeyError:
            n = ord(s)
            if n < 0x10000:
                #return '\\u{0:04x}'.format(n)
                return '\\u%04x' % (n,)
            else:
                # surrogate pair
                n -= 0x10000
                s1 = 0xd800 | ((n >> 10) & 0x3ff)
                s2 = 0xdc00 | (n & 0x3ff)
                #return '\\u{0:04x}\\u{1:04x}'.format(s1, s2)
                return '\\u%04x\\u%04x' % (s1, s2)
    return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"'


encode_basestring_ascii = c_encode_basestring_ascii or py_encode_basestring_ascii
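
# Illustrative behaviour of the two escapers (not part of the original
# module): non-ASCII text either passes through or becomes \uXXXX escapes.
#
#     >>> encode_basestring(u'caf\xe9')
#     u'"caf\xe9"'
#     >>> py_encode_basestring_ascii(u'caf\xe9')
#     '"caf\\u00e9"'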


class JSONEncoder(object):
    """Extensible JSON <http://json.org> encoder for Python data structures.

    Supports the following objects and types by default:

    +-------------------+---------------+
    | Python            | JSON          |
    +===================+===============+
    | dict              | object        |
    +-------------------+---------------+
    | list, tuple       | array         |
    +-------------------+---------------+
    | str, unicode      | string        |
    +-------------------+---------------+
    | int, long, float  | number        |
    +-------------------+---------------+
    | True              | true          |
    +-------------------+---------------+
    | False             | false         |
    +-------------------+---------------+
    | None              | null          |
    +-------------------+---------------+

    To extend this to recognize other objects, subclass and implement a
    ``.default()`` method that returns a serializable object for ``o`` if
    possible, otherwise it should call the superclass implementation (to
    raise ``TypeError``).

    """
    item_separator = ', '
    key_separator = ': '
    def __init__(self, skipkeys=False, ensure_ascii=True,
            check_circular=True, allow_nan=True, sort_keys=False,
            indent=None, separators=None, encoding='utf-8', default=None):
        """Constructor for JSONEncoder, with sensible defaults.

        If skipkeys is false, then it is a TypeError to attempt
        encoding of keys that are not str, int, long, float or None. If
        skipkeys is True, such items are simply skipped.

        If ensure_ascii is true, the output is guaranteed to be str
        objects with all incoming unicode characters escaped. If
        ensure_ascii is false, the output will be a unicode object.

        If check_circular is true, then lists, dicts, and custom encoded
        objects will be checked for circular references during encoding to
        prevent an infinite recursion (which would cause an OverflowError).
        Otherwise, no such check takes place.

        If allow_nan is true, then NaN, Infinity, and -Infinity will be
        encoded as such. This behavior is not JSON specification compliant,
        but is consistent with most JavaScript based encoders and decoders.
        Otherwise, it will be a ValueError to encode such floats.

        If sort_keys is true, then the output of dictionaries will be
        sorted by key; this is useful for regression tests to ensure
        that JSON serializations can be compared on a day-to-day basis.

        If indent is a non-negative integer, then JSON array
        elements and object members will be pretty-printed with that
        indent level. An indent level of 0 will only insert newlines.
        None is the most compact representation.

        If specified, separators should be an (item_separator, key_separator)
        tuple. The default is (', ', ': '). To get the most compact JSON
        representation you should specify (',', ':') to eliminate whitespace.

        If specified, default is a function that gets called for objects
        that can't otherwise be serialized. It should return a JSON encodable
        version of the object or raise a ``TypeError``.

        If encoding is not None, then all input strings will be
        transformed into unicode using that encoding prior to JSON-encoding.
        The default is UTF-8.

        """

        self.skipkeys = skipkeys
        self.ensure_ascii = ensure_ascii
        self.check_circular = check_circular
        self.allow_nan = allow_nan
        self.sort_keys = sort_keys
        self.indent = indent
        if separators is not None:
            self.item_separator, self.key_separator = separators
        if default is not None:
            self.default = default
        self.encoding = encoding

    def default(self, o):
        """Implement this method in a subclass such that it returns
        a serializable object for ``o``, or calls the base implementation
        (to raise a ``TypeError``).

        For example, to support arbitrary iterators, you could
        implement default like this::

            def default(self, o):
                try:
                    iterable = iter(o)
                except TypeError:
                    pass
                else:
                    return list(iterable)
                return JSONEncoder.default(self, o)

        """
        raise TypeError(repr(o) + " is not JSON serializable")

    def encode(self, o):
        """Return a JSON string representation of a Python data structure.

        >>> JSONEncoder().encode({"foo": ["bar", "baz"]})
        '{"foo": ["bar", "baz"]}'

        """
        # This is for extremely simple cases and benchmarks.
        if isinstance(o, basestring):
            if isinstance(o, str):
                _encoding = self.encoding
                if (_encoding is not None
                        and not (_encoding == 'utf-8')):
                    o = o.decode(_encoding)
            if self.ensure_ascii:
                return encode_basestring_ascii(o)
            else:
                return encode_basestring(o)
        # This doesn't pass the iterator directly to ''.join() because the
        # exceptions aren't as detailed. The list call should be roughly
        # equivalent to the PySequence_Fast that ''.join() would do.
        chunks = self.iterencode(o, _one_shot=True)
        if not isinstance(chunks, (list, tuple)):
            chunks = list(chunks)
        return ''.join(chunks)

    def iterencode(self, o, _one_shot=False):
        """Encode the given object and yield each string
        representation as available.

        For example::

            for chunk in JSONEncoder().iterencode(bigobject):
                mysocket.write(chunk)

        """
        if self.check_circular:
            markers = {}
        else:
            markers = None
        if self.ensure_ascii:
            _encoder = encode_basestring_ascii
        else:
            _encoder = encode_basestring
        if self.encoding != 'utf-8':
            def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding):
                if isinstance(o, str):
                    o = o.decode(_encoding)
                return _orig_encoder(o)

        def floatstr(o, allow_nan=self.allow_nan, _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY):
            # Check for specials. Note that this type of test is processor-
            # and/or platform-specific, so do tests which don't depend on
            # the internals.
            if o != o:
                text = 'NaN'
            elif o == _inf:
                text = 'Infinity'
            elif o == _neginf:
                text = '-Infinity'
            else:
                return _repr(o)

            if not allow_nan:
                raise ValueError(
                    "Out of range float values are not JSON compliant: " +
                    repr(o))

            return text

        if _one_shot and c_make_encoder is not None and not self.indent and not self.sort_keys:
            _iterencode = c_make_encoder(
                markers, self.default, _encoder, self.indent,
                self.key_separator, self.item_separator, self.sort_keys,
                self.skipkeys, self.allow_nan)
        else:
            _iterencode = _make_iterencode(
                markers, self.default, _encoder, self.indent, floatstr,
                self.key_separator, self.item_separator, self.sort_keys,
                self.skipkeys, _one_shot)
        return _iterencode(o, 0)

def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot,
        ## HACK: hand-optimized bytecode; turn globals into locals
        False=False,
        True=True,
        ValueError=ValueError,
        basestring=basestring,
        dict=dict,
        float=float,
        id=id,
        int=int,
        isinstance=isinstance,
        list=list,
        long=long,
        str=str,
        tuple=tuple,
    ):

    def _iterencode_list(lst, _current_indent_level):
        if not lst:
            yield '[]'
            return
        if markers is not None:
            markerid = id(lst)
            if markerid in markers:
                raise ValueError("Circular reference detected")
            markers[markerid] = lst
        buf = '['
        if _indent is not None:
            _current_indent_level += 1
            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
            separator = _item_separator + newline_indent
            buf += newline_indent
        else:
            newline_indent = None
            separator = _item_separator
        first = True
        for value in lst:
            if first:
                first = False
            else:
                buf = separator
            if isinstance(value, basestring):
                yield buf + _encoder(value)
            elif value is None:
                yield buf + 'null'
            elif value is True:
                yield buf + 'true'
            elif value is False:
                yield buf + 'false'
            elif isinstance(value, (int, long)):
                yield buf + str(value)
            elif isinstance(value, float):
                yield buf + _floatstr(value)
            else:
                yield buf
                if isinstance(value, (list, tuple)):
                    chunks = _iterencode_list(value, _current_indent_level)
                elif isinstance(value, dict):
                    chunks = _iterencode_dict(value, _current_indent_level)
                else:
                    chunks = _iterencode(value, _current_indent_level)
                for chunk in chunks:
                    yield chunk
        if newline_indent is not None:
            _current_indent_level -= 1
            yield '\n' + (' ' * (_indent * _current_indent_level))
        yield ']'
        if markers is not None:
            del markers[markerid]

    def _iterencode_dict(dct, _current_indent_level):
        if not dct:
            yield '{}'
            return
        if markers is not None:
            markerid = id(dct)
            if markerid in markers:
                raise ValueError("Circular reference detected")
            markers[markerid] = dct
        yield '{'
        if _indent is not None:
            _current_indent_level += 1
            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
            item_separator = _item_separator + newline_indent
            yield newline_indent
        else:
            newline_indent = None
            item_separator = _item_separator
        first = True
        if _sort_keys:
            items = dct.items()
            items.sort(key=lambda kv: kv[0])
        else:
            items = dct.iteritems()
        for key, value in items:
            if isinstance(key, basestring):
                pass
            # JavaScript is weakly typed for these, so it makes sense to
            # also allow them. Many encoders seem to do something like this.
            elif isinstance(key, float):
                key = _floatstr(key)
            elif key is True:
                key = 'true'
            elif key is False:
                key = 'false'
            elif key is None:
                key = 'null'
            elif isinstance(key, (int, long)):
                key = str(key)
            elif _skipkeys:
                continue
            else:
                raise TypeError("key " + repr(key) + " is not a string")
            if first:
                first = False
            else:
                yield item_separator
            yield _encoder(key)
            yield _key_separator
            if isinstance(value, basestring):
                yield _encoder(value)
            elif value is None:
                yield 'null'
            elif value is True:
                yield 'true'
            elif value is False:
                yield 'false'
            elif isinstance(value, (int, long)):
                yield str(value)
            elif isinstance(value, float):
                yield _floatstr(value)
            else:
                if isinstance(value, (list, tuple)):
                    chunks = _iterencode_list(value, _current_indent_level)
                elif isinstance(value, dict):
                    chunks = _iterencode_dict(value, _current_indent_level)
                else:
                    chunks = _iterencode(value, _current_indent_level)
                for chunk in chunks:
                    yield chunk
        if newline_indent is not None:
            _current_indent_level -= 1
            yield '\n' + (' ' * (_indent * _current_indent_level))
        yield '}'
        if markers is not None:
            del markers[markerid]

    def _iterencode(o, _current_indent_level):
        if isinstance(o, basestring):
            yield _encoder(o)
        elif o is None:
            yield 'null'
        elif o is True:
            yield 'true'
        elif o is False:
            yield 'false'
        elif isinstance(o, (int, long)):
            yield str(o)
        elif isinstance(o, float):
            yield _floatstr(o)
        elif isinstance(o, (list, tuple)):
            for chunk in _iterencode_list(o, _current_indent_level):
                yield chunk
        elif isinstance(o, dict):
            for chunk in _iterencode_dict(o, _current_indent_level):
                yield chunk
        else:
            if markers is not None:
                markerid = id(o)
                if markerid in markers:
                    raise ValueError("Circular reference detected")
                markers[markerid] = o
            o = _default(o)
            for chunk in _iterencode(o, _current_indent_level):
                yield chunk
            if markers is not None:
                del markers[markerid]

    return _iterencode
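
# Illustrative use of the pure-Python fallback path (not part of the
# original module): sort_keys forces _make_iterencode() rather than the C
# speedup, and a zero indent exercises the newline_indent logic above.
#
#     >>> JSONEncoder(sort_keys=True, indent=0).encode({'b': 1, 'a': 2})
#     '{\n"a": 2, \n"b": 1\n}'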
@@ -0,0 +1,65 @@
"""JSON token scanner
"""
import re
try:
    from simplejson._speedups import make_scanner as c_make_scanner
except ImportError:
    c_make_scanner = None

__all__ = ['make_scanner']

NUMBER_RE = re.compile(
    r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?',
    (re.VERBOSE | re.MULTILINE | re.DOTALL))

def py_make_scanner(context):
    parse_object = context.parse_object
    parse_array = context.parse_array
    parse_string = context.parse_string
    match_number = NUMBER_RE.match
    encoding = context.encoding
    strict = context.strict
    parse_float = context.parse_float
    parse_int = context.parse_int
    parse_constant = context.parse_constant
    object_hook = context.object_hook

    def _scan_once(string, idx):
        try:
            nextchar = string[idx]
        except IndexError:
            raise StopIteration

        if nextchar == '"':
            return parse_string(string, idx + 1, encoding, strict)
        elif nextchar == '{':
            return parse_object((string, idx + 1), encoding, strict, _scan_once, object_hook)
        elif nextchar == '[':
            return parse_array((string, idx + 1), _scan_once)
        elif nextchar == 'n' and string[idx:idx + 4] == 'null':
            return None, idx + 4
        elif nextchar == 't' and string[idx:idx + 4] == 'true':
            return True, idx + 4
        elif nextchar == 'f' and string[idx:idx + 5] == 'false':
            return False, idx + 5

        m = match_number(string, idx)
        if m is not None:
            integer, frac, exp = m.groups()
            if frac or exp:
                res = parse_float(integer + (frac or '') + (exp or ''))
            else:
                res = parse_int(integer)
            return res, m.end()
        elif nextchar == 'N' and string[idx:idx + 3] == 'NaN':
            return parse_constant('NaN'), idx + 3
        elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity':
            return parse_constant('Infinity'), idx + 8
        elif nextchar == '-' and string[idx:idx + 9] == '-Infinity':
            return parse_constant('-Infinity'), idx + 9
        else:
            raise StopIteration

    return _scan_once

make_scanner = c_make_scanner or py_make_scanner
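
# Illustrative usage of make_scanner() (not part of the original module):
# the scanner takes a decoder instance as its configuration context.
#
#     >>> from simplejson.decoder import JSONDecoder
#     >>> scan_once = make_scanner(JSONDecoder())
#     >>> scan_once('[true, 1.5]', 0)
#     ([True, 1.5], 11)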
@@ -0,0 +1,54 @@
# Copyright 2017, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

from __future__ import absolute_import
from __future__ import unicode_literals

"""
Fetch the connection configuration stack that would be used to connect to a
target, without actually connecting to it.
"""

import ansible_mitogen.connection

from ansible.plugins.action import ActionBase


class ActionModule(ActionBase):
    def run(self, tmp=None, task_vars=None):
        if not isinstance(self._connection,
                          ansible_mitogen.connection.Connection):
            return {
                'skipped': True,
            }

        return {
            'changed': True,
            'result': self._connection._build_stack(),
            '_ansible_verbose_always': True,
        }
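
# Illustrative playbook usage (a sketch; task and variable names are
# hypothetical, not taken from this diff). The registered 'result' key
# holds the would-be connection stack.
#
#   - name: Inspect the Mitogen connection configuration for this host
#     mitogen_get_stack:
#     register: stack
#
#   - debug: var=stack.result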
@@ -0,0 +1,67 @@
# Copyright 2017, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

from __future__ import absolute_import
import os.path
import sys

#
# This is not the real Strategy implementation module; it simply exists as a
# proxy to the real module, which is loaded using Python's regular import
# mechanism, to prevent Ansible's PluginLoader from making up a fake name that
# results in ansible_mitogen plugin modules being loaded twice: once by
# PluginLoader with a name like "ansible.plugins.strategy.mitogen", which is
# stuffed into sys.modules even though attempting to import it will trigger an
# ImportError, and once under its canonical name, "ansible_mitogen.strategy".
#
# Therefore we have a proxy module that imports it under the real name, and
# sets up the duff PluginLoader-imported module to just contain objects from
# the real module, so duplicate types don't exist in memory, and things like
# debuggers and isinstance() work predictably.
#

BASE_DIR = os.path.abspath(
    os.path.join(os.path.dirname(__file__), '../../..')
)

if BASE_DIR not in sys.path:
    sys.path.insert(0, BASE_DIR)

import ansible_mitogen.loaders
import ansible_mitogen.strategy


Base = ansible_mitogen.loaders.strategy_loader.get('host_pinned', class_only=True)

if Base is None:
    raise ImportError(
        'The host_pinned strategy is only available in Ansible 2.7 or newer.'
    )

class StrategyModule(ansible_mitogen.strategy.StrategyMixin, Base):
    pass
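
# Illustrative configuration (a sketch; the plugin path is an assumption,
# not taken from this diff): enabling the proxied strategy from ansible.cfg.
#
#   [defaults]
#   strategy_plugins = /path/to/ansible_mitogen/plugins/strategy
#   strategy = mitogen_host_pinned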
@@ -0,0 +1,593 @@
# Copyright 2017, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

from __future__ import absolute_import
from __future__ import unicode_literals

"""
Mitogen extends Ansible's target configuration mechanism in several ways that
require some care:

* Per-task configurables in Ansible like ansible_python_interpreter are
  connection-layer configurables in Mitogen. They must be extracted during
  each task execution to form the complete connection-layer configuration.

* Mitogen has extra configurables not supported by Ansible at all, such as
  mitogen_ssh_debug_level. These are extracted the same way as
  ansible_python_interpreter.

* Mitogen allows connections to be delegated to other machines. Ansible has no
  internal framework for this, and so Mitogen must figure out a delegated
  connection configuration all on its own. It cannot reuse much of the Ansible
  machinery for building a connection configuration, as that machinery is
  deeply spread out and hard-wired to expect Ansible's usual mode of
  operation.

For normal and delegate_to connections, Ansible's PlayContext is reused where
possible to maximize compatibility, but for proxy hops, configurations are
built up using the HostVars magic class to call VariableManager.get_vars()
behind the scenes on our behalf. Where Ansible has multiple sources of a
configuration item, for example, ansible_ssh_extra_args, Mitogen must (ideally
perfectly) reproduce how Ansible arrives at its value, without using
mechanisms that are hard-wired or change across Ansible versions.

That is what this file is for. It exports two spec classes, one that takes all
information from PlayContext, and another that takes (almost) all information
from HostVars.
"""

import abc
import os
import ansible.utils.shlex
import ansible.constants as C

from ansible.module_utils.six import with_metaclass


import mitogen.core


def parse_python_path(s):
    """
    Given the string set for ansible_python_interpreter, parse it using shell
    syntax and return an appropriate argument vector.
    """
    if s:
        return ansible.utils.shlex.shlex_split(s)
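
# Illustrative behaviour (not part of the original file): shell-style
# splitting turns a wrapper invocation such as '/usr/bin/env python' into
# the argument vector ['/usr/bin/env', 'python'], while an unset variable
# (None or '') yields None.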


def optional_secret(value):
    """
    Wrap `value` in :class:`mitogen.core.Secret` if it is not :data:`None`,
    otherwise return :data:`None`.
    """
    if value is not None:
        return mitogen.core.Secret(value)


def first_true(it, default=None):
    """
    Return the first truthy element from `it`.
    """
    for elem in it:
        if elem:
            return elem
    return default
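
# Illustrative behaviour (not part of the original file): first_true() picks
# the first configuration source that is actually set.
#
#     >>> first_true(['', None, '-H -S -n', 'fallback'], default='')
#     '-H -S -n'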
|
||||||
|
|
||||||
|
|
||||||
|
class Spec(with_metaclass(abc.ABCMeta, object)):
|
||||||
|
"""
|
||||||
|
A source for variables that comprise a connection configuration.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def transport(self):
|
||||||
|
"""
|
||||||
|
The name of the Ansible plug-in implementing the connection.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def inventory_name(self):
|
||||||
|
"""
|
||||||
|
The name of the target being connected to as it appears in Ansible's
|
||||||
|
inventory.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def remote_addr(self):
|
||||||
|
"""
|
||||||
|
The network address of the target, or for container and other special
|
||||||
|
targets, some other unique identifier.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def remote_user(self):
|
||||||
|
"""
|
||||||
|
The username of the login account on the target.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def password(self):
|
||||||
|
"""
|
||||||
|
The password of the login account on the target.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def become(self):
|
||||||
|
"""
|
||||||
|
:data:`True` if privilege escalation should be active.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def become_method(self):
|
||||||
|
"""
|
||||||
|
The name of the Ansible become method to use.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def become_user(self):
|
||||||
|
"""
|
||||||
|
The username of the target account for become.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def become_pass(self):
|
||||||
|
"""
|
||||||
|
The password of the target account for become.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def port(self):
|
||||||
|
"""
|
||||||
|
The port of the login service on the target machine.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def python_path(self):
|
||||||
|
"""
|
||||||
|
Path to the Python interpreter on the target machine.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def private_key_file(self):
|
||||||
|
"""
|
||||||
|
Path to the SSH private key file to use to login.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def ssh_executable(self):
|
||||||
|
"""
|
||||||
|
Path to the SSH executable.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def timeout(self):
|
||||||
|
"""
|
||||||
|
The generic timeout for all connections.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def ansible_ssh_timeout(self):
|
||||||
|
"""
|
||||||
|
The SSH-specific timeout for a connection.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def ssh_args(self):
|
||||||
|
"""
|
||||||
|
The list of additional arguments that should be included in an SSH
|
||||||
|
invocation.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def become_exe(self):
|
||||||
|
"""
|
||||||
|
The path to the executable implementing the become method on the remote
|
||||||
|
machine.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def sudo_args(self):
|
||||||
|
"""
|
||||||
|
The list of additional arguments that should be included in a become
|
||||||
|
invocation.
|
||||||
|
"""
|
||||||
|
# TODO: split out into sudo_args/become_args.
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_via(self):
|
||||||
|
"""
|
||||||
|
The value of the mitogen_via= variable for this connection. Indicates
|
||||||
|
the connection should be established via an intermediary.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_kind(self):
|
||||||
|
"""
|
||||||
|
The type of container to use with the "setns" transport.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_docker_path(self):
|
||||||
|
"""
|
||||||
|
The path to the "docker" program for the 'docker' transport.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_kubectl_path(self):
|
||||||
|
"""
|
||||||
|
The path to the "kubectl" program for the 'docker' transport.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_lxc_path(self):
|
||||||
|
"""
|
||||||
|
The path to the "lxc" program for the 'lxd' transport.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_lxc_attach_path(self):
|
||||||
|
"""
|
||||||
|
The path to the "lxc-attach" program for the 'lxc' transport.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_lxc_info_path(self):
|
||||||
|
"""
|
||||||
|
The path to the "lxc-info" program for the 'lxc' transport.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_machinectl_path(self):
|
||||||
|
"""
|
||||||
|
The path to the "machinectl" program for the 'setns' transport.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_ssh_debug_level(self):
|
||||||
|
"""
|
||||||
|
The SSH debug level.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def mitogen_ssh_compression(self):
|
||||||
|
"""
|
||||||
|
Whether SSH compression is enabled.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def extra_args(self):
|
||||||
|
"""
|
||||||
|
Connection-specific arguments.
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
class PlayContextSpec(Spec):
    """
    PlayContextSpec takes almost all its information as-is from Ansible's
    PlayContext. It is used for normal connections and delegate_to
    connections, and should always be accurate.
    """
    def __init__(self, connection, play_context, transport, inventory_name):
        self._connection = connection
        self._play_context = play_context
        self._transport = transport
        self._inventory_name = inventory_name

    def transport(self):
        return self._transport

    def inventory_name(self):
        return self._inventory_name

    def remote_addr(self):
        return self._play_context.remote_addr

    def remote_user(self):
        return self._play_context.remote_user

    def become(self):
        return self._play_context.become

    def become_method(self):
        return self._play_context.become_method

    def become_user(self):
        return self._play_context.become_user

    def become_pass(self):
        return optional_secret(self._play_context.become_pass)

    def password(self):
        return optional_secret(self._play_context.password)

    def port(self):
        return self._play_context.port

    def python_path(self):
        return parse_python_path(
            self._connection.get_task_var('ansible_python_interpreter')
        )

    def private_key_file(self):
        return self._play_context.private_key_file

    def ssh_executable(self):
        return self._play_context.ssh_executable

    def timeout(self):
        return self._play_context.timeout

    def ansible_ssh_timeout(self):
        return (
            self._connection.get_task_var('ansible_timeout') or
            self._connection.get_task_var('ansible_ssh_timeout') or
            self.timeout()
        )

    def ssh_args(self):
        return [
            mitogen.core.to_text(term)
            for s in (
                getattr(self._play_context, 'ssh_args', ''),
                getattr(self._play_context, 'ssh_common_args', ''),
                getattr(self._play_context, 'ssh_extra_args', '')
            )
            for term in ansible.utils.shlex.shlex_split(s or '')
        ]

    def become_exe(self):
        return self._play_context.become_exe

    def sudo_args(self):
        return [
            mitogen.core.to_text(term)
            for term in ansible.utils.shlex.shlex_split(
                first_true((
                    self._play_context.become_flags,
                    self._play_context.sudo_flags,
                    # Ansible 2.3.
                    getattr(C, 'DEFAULT_BECOME_FLAGS', ''),
                    getattr(C, 'DEFAULT_SUDO_FLAGS', '')
                ), default='')
            )
        ]

    def mitogen_via(self):
        return self._connection.get_task_var('mitogen_via')

    def mitogen_kind(self):
        return self._connection.get_task_var('mitogen_kind')

    def mitogen_docker_path(self):
        return self._connection.get_task_var('mitogen_docker_path')

    def mitogen_kubectl_path(self):
        return self._connection.get_task_var('mitogen_kubectl_path')

    def mitogen_lxc_path(self):
        return self._connection.get_task_var('mitogen_lxc_path')

    def mitogen_lxc_attach_path(self):
        return self._connection.get_task_var('mitogen_lxc_attach_path')

    def mitogen_lxc_info_path(self):
        return self._connection.get_task_var('mitogen_lxc_info_path')

    def mitogen_machinectl_path(self):
        return self._connection.get_task_var('mitogen_machinectl_path')

    def mitogen_ssh_debug_level(self):
        return self._connection.get_task_var('mitogen_ssh_debug_level')

    def mitogen_ssh_compression(self):
        return self._connection.get_task_var('mitogen_ssh_compression')

    def extra_args(self):
        return self._connection.get_extra_args()

class MitogenViaSpec(Spec):
    """
    MitogenViaSpec takes most of its information from the HostVars of the
    running task. HostVars is a lightweight wrapper around VariableManager, so
    it is better to say that VariableManager.get_vars() is the ultimate source
    of MitogenViaSpec's information.

    Due to this, mitogen_via= hosts must have all their configuration
    information represented as host and group variables. We cannot use any
    per-task configuration, as all that data belongs to the real target host.

    Ansible uses all kinds of strange historical logic for calculating
    variables, including making their precedence configurable. MitogenViaSpec
    must ultimately reimplement all of that logic. It is likely that if you
    are having a configuration problem with connection delegation, the answer
    to your problem lies in the method implementations below!
    """
    def __init__(self, inventory_name, host_vars,
                 become_method, become_user):
        self._inventory_name = inventory_name
        self._host_vars = host_vars
        self._become_method = become_method
        self._become_user = become_user

    def transport(self):
        return (
            self._host_vars.get('ansible_connection') or
            C.DEFAULT_TRANSPORT
        )

    def inventory_name(self):
        return self._inventory_name

    def remote_addr(self):
        return (
            self._host_vars.get('ansible_host') or
            self._inventory_name
        )

    def remote_user(self):
        return (
            self._host_vars.get('ansible_user') or
            self._host_vars.get('ansible_ssh_user') or
            C.DEFAULT_REMOTE_USER
        )

    def become(self):
        return bool(self._become_user)

    def become_method(self):
        return self._become_method or C.DEFAULT_BECOME_METHOD

    def become_user(self):
        return self._become_user

    def become_pass(self):
        return optional_secret(
            # TODO: Might have to come from PlayContext.
            self._host_vars.get('ansible_become_password') or
            self._host_vars.get('ansible_become_pass')
        )

    def password(self):
        return optional_secret(
            # TODO: Might have to come from PlayContext.
            self._host_vars.get('ansible_ssh_pass') or
            self._host_vars.get('ansible_password')
        )

    def port(self):
        return (
            self._host_vars.get('ansible_port') or
            C.DEFAULT_REMOTE_PORT
        )

    def python_path(self):
        return parse_python_path(
            self._host_vars.get('ansible_python_interpreter')
            # This variable has no default for remote hosts. For local hosts
            # it is sys.executable.
        )

    def private_key_file(self):
        # TODO: must come from PlayContext too.
        return (
            self._host_vars.get('ansible_ssh_private_key_file') or
            self._host_vars.get('ansible_private_key_file') or
            C.DEFAULT_PRIVATE_KEY_FILE
        )

    def ssh_executable(self):
        return (
            self._host_vars.get('ansible_ssh_executable') or
            C.ANSIBLE_SSH_EXECUTABLE
        )

    def timeout(self):
        # TODO: must come from PlayContext too.
        return C.DEFAULT_TIMEOUT

    def ansible_ssh_timeout(self):
        return (
            self._host_vars.get('ansible_timeout') or
            self._host_vars.get('ansible_ssh_timeout') or
            self.timeout()
        )

    def ssh_args(self):
        return [
            mitogen.core.to_text(term)
            for s in (
                (
                    self._host_vars.get('ansible_ssh_args') or
                    getattr(C, 'ANSIBLE_SSH_ARGS', None) or
                    os.environ.get('ANSIBLE_SSH_ARGS')
                    # TODO: ini entry. older versions.
                ),
                (
                    self._host_vars.get('ansible_ssh_common_args') or
                    os.environ.get('ANSIBLE_SSH_COMMON_ARGS')
                    # TODO: ini entry.
                ),
                (
                    self._host_vars.get('ansible_ssh_extra_args') or
                    os.environ.get('ANSIBLE_SSH_EXTRA_ARGS')
                    # TODO: ini entry.
                ),
            )
            for term in ansible.utils.shlex.shlex_split(s or '')
        ]

    def become_exe(self):
        return (
            self._host_vars.get('ansible_become_exe') or
            C.DEFAULT_BECOME_EXE
        )

    def sudo_args(self):
        return [
            mitogen.core.to_text(term)
            for s in (
                self._host_vars.get('ansible_sudo_flags') or '',
                self._host_vars.get('ansible_become_flags') or '',
            )
            for term in ansible.utils.shlex.shlex_split(s)
        ]

    def mitogen_via(self):
        return self._host_vars.get('mitogen_via')

    def mitogen_kind(self):
        return self._host_vars.get('mitogen_kind')

    def mitogen_docker_path(self):
        return self._host_vars.get('mitogen_docker_path')

    def mitogen_kubectl_path(self):
        return self._host_vars.get('mitogen_kubectl_path')

    def mitogen_lxc_path(self):
        return self._host_vars.get('mitogen_lxc_path')

    def mitogen_lxc_attach_path(self):
        return self._host_vars.get('mitogen_lxc_attach_path')

    def mitogen_lxc_info_path(self):
        return self._host_vars.get('mitogen_lxc_info_path')

    def mitogen_machinectl_path(self):
        return self._host_vars.get('mitogen_machinectl_path')

    def mitogen_ssh_debug_level(self):
        return self._host_vars.get('mitogen_ssh_debug_level')

    def mitogen_ssh_compression(self):
        return self._host_vars.get('mitogen_ssh_compression')

    def extra_args(self):
        return []  # TODO

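# The getters above all follow the same 'first truthy candidate wins' rule; a
# hypothetical helper (not present in this module) makes the precedence chain
# explicit:
#
#     def first_of(*candidates):
#         for candidate in candidates:
#             if candidate:
#                 return candidate
#         return None
#
#     # remote_user() for a mitogen_via= host is then equivalent to:
#     #   first_of(host_vars.get('ansible_user'),
#     #            host_vars.get('ansible_ssh_user'),
#     #            C.DEFAULT_REMOTE_USER)
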
@@ -1,17 +1,11 @@
--r docs/docs-requirements.txt
-ansible==2.6.1
-coverage==4.5.1
-Django==1.6.11 # Last version supporting 2.6.
-mock==2.0.0
-pytz==2018.5
-paramiko==2.3.2 # Last 2.6-compat version.
-pytest-catchlog==1.2.2
-pytest==3.1.2
-PyYAML==3.11; python_version < '2.7'
-PyYAML==3.12; python_version >= '2.7'
-timeoutcontext==1.2.0
-unittest2==1.1.0
-# Fix InsecurePlatformWarning while creating py26 tox environment
-# https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
-urllib3[secure]; python_version < '2.7.9'
-google-api-python-client==1.6.5
+# This file is no longer used by CI jobs, it's mostly for interactive use.
+# Instead CI jobs grab the relevant sub-requirement.
+
+# mitogen_tests
+-r tests/requirements.txt
+
+# ansible_tests
+-r tests/ansible/requirements.txt
+
+# readthedocs
+-r docs/requirements.txt
@@ -0,0 +1,3 @@
*.pcap* filter=lfs diff=lfs merge=lfs -text
run_hostname_100_times_mito.pcap.gz filter=lfs diff=lfs merge=lfs -text
run_hostname_100_times_vanilla.pcap.gz filter=lfs diff=lfs merge=lfs -text
@@ -0,0 +1,16 @@

import sys

# Bump hairline strokes up to a minimum width of 2 so diagrams stay legible
# when scaled.

import lxml.etree
import glob


for name in sys.argv[1:]:  # glob.glob('*/*.svg'): #+ glob.glob('images/ansible/*.svg'):
    doc = lxml.etree.parse(open(name))
    svg = doc.getroot()
    for elem in svg.cssselect('[stroke-width]'):
        # Compare numerically: the attribute is a string, and '10' < '2'
        # lexicographically.
        if float(elem.attrib['stroke-width']) < 2:
            elem.attrib['stroke-width'] = '2'

    open(name, 'w').write(lxml.etree.tostring(svg, xml_declaration=True, encoding='UTF-8'))
@@ -0,0 +1,13 @@

# Add viewBox attr to SVGs lacking it, so IE scales properly.

import lxml.etree
import glob


for name in glob.glob('images/*.svg') + glob.glob('images/ansible/*.svg'):
    doc = lxml.etree.parse(open(name))
    svg = doc.getroot()
    if 'viewBox' not in svg.attrib:
        svg.attrib['viewBox'] = '0 0 %(width)s %(height)s' % svg.attrib
    open(name, 'w').write(lxml.etree.tostring(svg, xml_declaration=True, encoding='UTF-8'))
@@ -1,4 +1,3 @@
 Sphinx==1.7.1
-sphinx-autobuild==0.6.0 # Last version to support Python 2.6
 sphinxcontrib-programoutput==0.11
 alabaster==0.7.10
@@ -1,90 +0,0 @@
Importer Wall Of Shame
----------------------

The following modules and packages violate protocol or best practice in some way:

* They run magic during ``__init__.py`` that makes life hard for Mitogen.
  Executing code during module import is always bad, and Mitogen is a concrete
  benchmark for why it's bad.

* They install crap in :py:data:`sys.modules` that completely ignores or
  partially implements the protocols laid out in PEP-302.

* They "vendor" a third party package, either incompletely, using hacks visible
  through the runtime's standard interfaces, or with ancient versions of code
  that in turn mess with :py:data:`sys.modules` in some horrible way.

Bugs will probably be filed for these in time, but that does not address the
huge installed base of existing old software versions, so hacks are needed
anyway.


``pbr``
=======

It claims to use ``pkg_resources`` to read version information
(``_get_version_from_pkg_metadata()``), which would result in PEP-302 being
reused and everything just working wonderfully, but instead it actually does
direct filesystem access.

**What could it do instead?**

* ``pkg_resources.resource_stream()``

**What Mitogen is forced to do**

When it sees ``pbr`` being loaded, it smodges the process environment with a
``PBR_VERSION`` variable to override any attempt to auto-detect the version.
This will probably break code I haven't seen yet.

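A rough sketch of that override (the hook name and placeholder version below
are assumptions; only the ``PBR_VERSION`` variable comes from the behaviour
described above)::

    import os

    def on_pbr_detected():
        # Short-circuit pbr's version auto-detection so it never performs
        # the filesystem access described above.
        os.environ['PBR_VERSION'] = '0.0.0'  # placeholder value
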
``pkg_resources``
=================

Anything that imports ``pkg_resources`` will eventually cause ``pkg_resources``
to try and import and scan ``__main__`` for its ``__requires__`` attribute
(``pkg_resources/__init__.py::_build_master()``). This breaks any app that is
not expecting its ``__main__`` to suddenly be sucked over a network and
injected into a remote process, like py.test.

A future version of Mitogen might have a more general hack that doesn't import
the master's ``__main__`` as ``__main__`` in the slave, avoiding all kinds of
issues like these.

**What could it do instead?**

* Explicit is better than implicit: wait until the magical behaviour is
  explicitly requested (i.e. an API call).

* Use ``get("__main__")`` on :py:data:`sys.modules` rather than ``import``, but
  this method isn't general enough, it only really helps tools like Mitogen.

**What Mitogen is forced to do**

Examine the stack during every attempt to import ``__main__`` and check if the
requestee module is named ``pkg_resources``; if so, refuse the import.

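The stack examination is cheap to picture; a minimal sketch (hypothetical, not
Mitogen's actual implementation)::

    import sys

    def requested_by_pkg_resources():
        # Walk outward from the current frame; if any calling frame belongs
        # to pkg_resources, the __main__ import should be refused.
        frame = sys._getframe()
        while frame is not None:
            if frame.f_globals.get('__name__', '').startswith('pkg_resources'):
                return True
            frame = frame.f_back
        return False
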
``six``
=======

The ``six`` module makes some effort to conform to PEP-302, but it is missing
several critical pieces, e.g. the ``__loader__`` attribute. This not only
breaks the Python standard library tooling (such as the :py:mod:`inspect`
module), but also Mitogen. Newer versions of ``six`` improve things somewhat,
but there are still outstanding issues preventing Mitogen from working with
``six``.

This package is sufficiently popular that it must eventually be supported. See
`here for an example issue`_.

.. _here for an example issue: https://github.com/dw/mitogen/issues/31

**What could it do instead?**

* Any custom hacks installed into :py:data:`sys.modules` should support the
  protocols laid out in PEP-302.

**What Mitogen is forced to do**

Vendored versions of ``six`` currently don't work at all.
@@ -0,0 +1,46 @@
# Wire up a ping/pong counting loop between 2 subprocesses.

from __future__ import print_function
import mitogen.core
import mitogen.select


@mitogen.core.takes_router
def ping_pong(control_sender, router):
    with mitogen.core.Receiver(router) as recv:
        # Tell caller how to communicate with us.
        control_sender.send(recv.to_sender())

        # Wait for caller to tell us how to talk back:
        data_sender = recv.get().unpickle()

        n = 0
        while (n + 1) < 30:
            n = recv.get().unpickle()
            print('the number is currently', n)
            data_sender.send(n + 1)


@mitogen.main()
def main(router):
    # Create a receiver for control messages.
    with mitogen.core.Receiver(router) as recv:
        # Start ping_pong() in child 1 and fetch its sender.
        c1 = router.local()
        c1_call = c1.call_async(ping_pong, recv.to_sender())
        c1_sender = recv.get().unpickle()

        # Start ping_pong() in child 2 and fetch its sender.
        c2 = router.local()
        c2_call = c2.call_async(ping_pong, recv.to_sender())
        c2_sender = recv.get().unpickle()

        # Tell the children about each others' senders.
        c1_sender.send(c2_sender)
        c2_sender.send(c1_sender)

        # Start the loop.
        c1_sender.send(0)

        # Wait for both functions to return.
        mitogen.select.Select.all([c1_call, c2_call])
@@ -0,0 +1,103 @@

#
# This demonstrates using a nested select.Select() to simultaneously watch for
# in-progress events generated by a bunch of function calls, and the completion
# of those function calls.
#
# We start 5 children and run a function in each of them in parallel. The
# function writes five numbers to a Sender before returning. The master
# reads the numbers from each child as they are generated, and exits the loop
# when the last function returns.
#

from __future__ import absolute_import
from __future__ import print_function

import time

import mitogen
import mitogen.select


def count_to(sender, n, wait=0.333):
    for x in range(n):
        sender.send(x)
        time.sleep(wait)


@mitogen.main()
def main(router):
    # Start 5 subprocesses and give them made up names.
    contexts = {
        'host%d' % (i,): router.local()
        for i in range(5)
    }

    # Used later to recover hostname. A future Mitogen will provide a better
    # way to get app data references back out of its IO primitives, for now
    # you need to do it manually.
    hostname_by_context_id = {
        context.context_id: hostname
        for hostname, context in contexts.items()
    }

    # I am a select that holds the receivers that will receive the function
    # call results. Selects are one-shot by default, which means each receiver
    # is removed from them as a result arrives. Therefore it means the last
    # function has completed when bool(calls_sel) is False.
    calls_sel = mitogen.select.Select()

    # I receive the numbers as they are counted.
    status_recv = mitogen.core.Receiver(router)

    # Start the function calls.
    for hostname, context in contexts.items():
        calls_sel.add(
            context.call_async(
                count_to,
                sender=status_recv.to_sender(),
                n=5,
                wait=0.333
            )
        )

    # Create a select subscribed to the function call result Select, and to
    # the number-counting receiver. Any message arriving on any child of this
    # Select will wake it up -- be it a message arriving on the status
    # receiver, or any message arriving on any of the function call result
    # receivers.
    #
    # Once the last call is completed, calls_sel will be empty since it's
    # oneshot=True (the default), causing __bool__ to be False.
    both_sel = mitogen.select.Select([status_recv, calls_sel], oneshot=False)

    # Internally selects store a strong reference from Receiver->Select that
    # will keep the Select alive as long as the receiver is alive. If a
    # receiver or select otherwise 'outlives' some parent select, attempting
    # to re-add it to a new select will raise an error. In all cases it's
    # desirable to call Select.close(). This can be done as a context manager.
    with calls_sel, both_sel:
        while calls_sel:
            try:
                msg = both_sel.get(timeout=60.0)
            except mitogen.core.TimeoutError:
                print("No update in 60 seconds, something's broke")
                break

            hostname = hostname_by_context_id[msg.src_id]

            if msg.receiver is status_recv:
                # Handle a status update. See
                # https://mitogen.readthedocs.io/en/stable/api.html#mitogen.core.Message.receiver
                print('Got status update from %s: %s' % (hostname, msg.unpickle()))
            elif msg.receiver is calls_sel:  # subselect
                # Handle a function call result.
                try:
                    assert None == msg.unpickle()
                    print('Task succeeded on %s' % (hostname,))
                except mitogen.core.CallError as e:
                    print('Task failed on host %s: %s' % (hostname, e))

    if calls_sel:
        print('Some tasks did not complete.')
    else:
        print('All tasks completed.')
@@ -0,0 +1,295 @@

#
# This program is a stand-in for good intro docs. It just documents various
# basics of using Mitogen.
#

from __future__ import absolute_import
from __future__ import print_function

import hashlib
import io
import os
import spwd

import mitogen.core
import mitogen.master
import mitogen.service
import mitogen.utils


def get_file_contents(path):
    """
    Get the contents of a file.
    """
    with open(path, 'rb') as fp:
        # mitogen.core.Blob() is a bytes subclass with a repr() that returns a
        # summary of the blob, rather than the raw blob data. This makes
        # logging output *much* nicer. Unlike most custom types, blobs can be
        # serialized.
        return mitogen.core.Blob(fp.read())


def put_file_contents(path, s):
    """
    Write the contents of a file.
    """
    with open(path, 'wb') as fp:
        fp.write(s)


def streamy_download_file(context, path):
    """
    Fetch a file from the FileService hosted by `context`.
    """
    bio = io.BytesIO()

    # FileService.get() is not actually an exposed service method, it's just a
    # classmethod that wraps up the complicated dance of implementing the
    # transfer.
    ok, metadata = mitogen.service.FileService.get(context, path, bio)

    return {
        'success': ok,
        'metadata': metadata,
        'size': len(bio.getvalue()),
    }


def get_password_hash(username):
    """
    Fetch a user's password hash.
    """
    try:
        h = spwd.getspnam(username)
    except KeyError:
        return None

    # mitogen.core.Secret() is a Unicode subclass with a repr() that hides the
    # secret data. This keeps secret stuff out of logs. Like blobs, secrets
    # can also be serialized.
    return mitogen.core.Secret(h)


def md5sum(path):
    """
    Return the MD5 checksum for a file.
    """
    return hashlib.md5(get_file_contents(path)).hexdigest()


def work_on_machine(context):
    """
    Do stuff to a remote context.
    """
    print("Created context. Context ID is", context.context_id)

    # You don't need to understand any/all of this, but it's helpful to grok
    # the whole chain:
    #
    # - Context.call() is a light wrapper around .call_async(), the wrapper
    #   simply blocks the caller until a reply arrives.
    # - .call_async() serializes the call signature into a message and passes
    #   it to .send_async()
    # - .send_async() creates a mitogen.core.Receiver() on the local router.
    #   The receiver constructor uses Router.add_handle() to allocate a
    #   'reply_to' handle and install a callback function that wakes the
    #   receiver when a reply message arrives.
    # - .send_async() puts the reply handle in the Message.reply_to field and
    #   passes it to .send()
    # - Context.send() stamps the destination context ID into the
    #   Message.dst_id field and passes it to Router.route()
    # - Router.route() uses Broker.defer() to schedule _async_route(msg)
    #   on the Broker thread.
    # [broker thread]
    # - The broker thread wakes and calls _async_route(msg)
    # - Router._async_route() notices 'dst_id' is for a remote context and
    #   looks up the stream on which messages for dst_id should be sent (may
    #   be a direct connection or not), and calls Stream.send()
    # - Stream.send() packs the message into a bytestring, appends it to
    #   Stream._output_buf, and calls Broker.start_transmit()
    # - Broker finishes work, reenters IO loop. IO loop wakes due to writeable
    #   stream.
    # - Stream.on_transmit() writes the full/partial buffer to SSH, calls
    #   stop_transmit() to mark the stream unwriteable once _output_buf is
    #   empty.
    # - Broker IO loop sleeps, no readers/writers.
    # - Broker wakes due to SSH stream readable.
    # - Stream.on_receive() called, reads the reply message, converts it to a
    #   Message and passes it to Router._async_route().
    # - Router._async_route() notices message is for local context, looks up
    #   target handle in the .add_handle() registry.
    # - Receiver._on_receive() called, appends message to receiver queue.
    # [main thread]
    # - Receiver.get() used to block the original Context.call() wakes and
    #   pops the message from the queue.
    # - Message data (pickled return value) is deserialized and returned to
    #   the caller.
    print("It's running on the local machine. Its PID is",
          context.call(os.getpid))

    # Now let's call a function defined in this module. On receiving the
    # function call request, the child attempts to import __main__, which is
    # initially missing, causing the importer in the child to request it from
    # its parent. That causes _this script_ to be sent as the module source
    # over the wire.
    print("Calling md5sum(/etc/passwd) in the child:",
          context.call(md5sum, '/etc/passwd'))

    # Now let's "transfer" a file. The simplest way to do this is calling a
    # function that returns the file data, which is totally fine for small
    # files.
    print("Download /etc/passwd via function call: %d bytes" % (
        len(context.call(get_file_contents, '/etc/passwd'))
    ))

    # And using function calls, in the other direction:
    print("Upload /tmp/blah via function call: %s" % (
        context.call(put_file_contents, '/tmp/blah', b'blah!'),
    ))

    # Now let's transfer what might be a big file. The problem with big files
    # is that they may not fit in RAM. This uses mitogen.service.FileService
    # to implement streamy file transfer instead. The sender must have a
    # 'service pool' running that will host FileService. First let's do the
    # 'upload' direction, where the master hosts FileService.

    # Steal the 'Router' reference from the context object. In a real app the
    # pool would be constructed once at startup, this is just demo code.
    file_service = mitogen.service.FileService(context.router)

    # Start the pool.
    pool = mitogen.service.Pool(context.router, services=[file_service])

    # Grant access to a file on the local disk from unprivileged contexts.
    # .register() is also exposed as a service method -- you can call it on a
    # child context from any more privileged context.
    file_service.register('/etc/passwd')

    # Now call our wrapper function that knows how to handle the transfer. In
    # a real app, this wrapper might also set ownership/modes or do any other
    # app-specific stuff relating to the file that was transferred.
    print("Streamy upload /etc/passwd: remote result: %s" % (
        context.call(
            streamy_download_file,
            # To avoid hard-wiring streamy_download_file(), we want to pass it
            # a Context object that hosts the file service it should request
            # files from. Router.myself() returns a Context referring to this
            # process.
            context=context.router.myself(),
            path='/etc/passwd',
        ),
    ))

    # Shut down the pool now we're done with it, else the app will hang at
    # exit. Once again, this should only happen once at app startup/exit, not
    # for every file transfer!
    pool.stop(join=True)

    # Now let's do the same thing but in reverse: we use FileService on the
    # remote to download a file. This uses context.call_service(), which
    # invokes a special code path that causes auto-initialization of a thread
    # pool in the target, and auto-construction of the target service, but
    # only if the service call was made by a more privileged context. We could
    # write a helper function that runs in the remote to do all that by hand,
    # but the library handles it for us.

    # Make the file accessible. A future FileService could avoid the need for
    # this for privileged contexts.
    context.call_service(
        service_name=mitogen.service.FileService,
        method_name='register',
        path='/etc/passwd'
    )

    # Now we can use our streamy_download_file() function in reverse --
    # running it from this process and having it fetch from the remote
    # process:
    print("Streamy download /etc/passwd: result: %s" % (
        streamy_download_file(context, '/etc/passwd'),
    ))


def main():
    # Setup logging. Mitogen produces a LOT of logging. Over the course of the
    # stable series, Mitogen's loggers will be carved up so more selective /
    # user-friendly logging is possible. mitogen.utils.log_to_file() just sets
    # up something basic, defaulting to INFO level, but you can override from
    # the command-line by passing MITOGEN_LOG_LEVEL=debug or
    # MITOGEN_LOG_LEVEL=io. IO logging is sometimes useful for hangs, but it
    # often creates more confusion than it solves.
    mitogen.utils.log_to_file()

    # Construct the Broker thread. It manages an async IO loop listening for
    # reads from any active connection, or wakes from any non-Broker thread.
    # Because Mitogen uses a background worker thread, it is extremely
    # important to pay attention to the use of UNIX fork in your code --
    # forking entails making a snapshot of the state of all locks in the
    # program, including those in the logging module, and thus can create code
    # that appears to work for a long time, before deadlocking randomly.
    # Forking in a Mitogen app requires significant upfront planning!
    broker = mitogen.master.Broker()

    # Construct a Router. This accepts messages (mitogen.core.Message) and
    # either dispatches locally addressed messages to local handlers (added
    # via Router.add_handle()) on the broker thread, or forwards the message
    # towards the target context.
    #
    # The router also acts as an uglyish God object for creating new
    # connections. This was a design mistake, really those methods should be
    # directly imported from e.g. 'mitogen.ssh'.
    router = mitogen.master.Router(broker)

    # Router can act like a context manager. It simply ensures
    # Broker.shutdown() is called on exception / exit. That prevents the app
    # hanging due to a forgotten background thread. For throwaway scripts,
    # there are also decorator versions "@mitogen.main()" and
    # "@mitogen.utils.with_router" that do the same thing with less typing.
    with router:
        # Now let's construct a context. The '.local()' constructor just
        # creates the context as a subprocess, the simplest possible case.
        child = router.local()
        print("Created a context:", child)
        print()

        # This demonstrates the standard IO redirection. We call the print
        # function in the remote context, that should cause a log message to
        # be emitted. Any subprocesses started by the remote also get the same
        # treatment, so it's very easy to spot otherwise discarded errors/etc.
        # from remote tools.
        child.call(print, "Hello from child.")

        # Context objects make it semi-convenient to treat the local machine
        # the same as a remote machine.
        work_on_machine(child)

        # Now let's construct a proxied context. We'll simply use the .local()
        # constructor again, but construct it via 'child'. In effect we are
        # constructing a sub-sub-process. Instead of .local() here, we could
        # have used .sudo() or .ssh() or anything else.
        subchild = router.local(via=child)
        print()
        print()
        print()
        print("Created a context as a child of another context:", subchild)

        # Do everything again with the new child.
        work_on_machine(subchild)

        # We can selectively shut down individual children if we want:
        subchild.shutdown(wait=True)

        # Or we can simply fall off the end of the scope, effectively calling
        # Broker.shutdown(), which causes all children to die as part of
        # shutdown.


# The child module importer detects the execution guard below and removes any
# code appearing after it, and refuses to execute "__main__" if it is absent.
# This is necessary to prevent a common problem where people try to call
# functions defined in __main__ without first wrapping it up to be importable
# as a module, which previously hung the target, or caused bizarre recursive
# script runs.
if __name__ == '__main__':
    main()
@@ -1,288 +0,0 @@
# encoding: utf-8
"""Selected backports from Python stdlib functools module
"""
# Written by Nick Coghlan <ncoghlan at gmail.com>,
# Raymond Hettinger <python at rcn.com>,
# and Łukasz Langa <lukasz at langa.pl>.
# Copyright (C) 2006-2013 Python Software Foundation.

__all__ = [
    'update_wrapper', 'wraps', 'WRAPPER_ASSIGNMENTS', 'WRAPPER_UPDATES',
    'lru_cache',
]

from threading import RLock


################################################################################
### update_wrapper() and wraps() decorator
################################################################################

# update_wrapper() and wraps() are tools to help write
# wrapper functions that can handle naive introspection

WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__qualname__', '__doc__',
                       '__annotations__')
WRAPPER_UPDATES = ('__dict__',)

def update_wrapper(wrapper,
                   wrapped,
                   assigned=WRAPPER_ASSIGNMENTS,
                   updated=WRAPPER_UPDATES):
    """Update a wrapper function to look like the wrapped function

    wrapper is the function to be updated
    wrapped is the original function
    assigned is a tuple naming the attributes assigned directly
    from the wrapped function to the wrapper function (defaults to
    functools.WRAPPER_ASSIGNMENTS)
    updated is a tuple naming the attributes of the wrapper that
    are updated with the corresponding attribute from the wrapped
    function (defaults to functools.WRAPPER_UPDATES)
    """
    for attr in assigned:
        try:
            value = getattr(wrapped, attr)
        except AttributeError:
            pass
        else:
            setattr(wrapper, attr, value)
    for attr in updated:
        getattr(wrapper, attr).update(getattr(wrapped, attr, {}))
    # Issue #17482: set __wrapped__ last so we don't inadvertently copy it
    # from the wrapped function when updating __dict__
    wrapper.__wrapped__ = wrapped
    # Return the wrapper so this can be used as a decorator via partial()
    return wrapper


def wraps(wrapped,
          assigned=WRAPPER_ASSIGNMENTS,
          updated=WRAPPER_UPDATES):
    """Decorator factory to apply update_wrapper() to a wrapper function

    Returns a decorator that invokes update_wrapper() with the decorated
    function as the wrapper argument and the arguments to wraps() as the
    remaining arguments. Default arguments are as for update_wrapper().
    This is a convenience function to simplify applying partial() to
    update_wrapper().
    """
    return partial(update_wrapper, wrapped=wrapped,
                   assigned=assigned, updated=updated)


################################################################################
### partial() argument application
################################################################################

# Purely functional, no descriptor behaviour
def partial(func, *args, **keywords):
    """New function with partial application of the given arguments
    and keywords.
    """
    if hasattr(func, 'func'):
        args = func.args + args
        tmpkw = func.keywords.copy()
        tmpkw.update(keywords)
        keywords = tmpkw
        del tmpkw
        func = func.func

    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc


################################################################################
### LRU Cache function decorator
################################################################################

class _HashedSeq(list):
    """This class guarantees that hash() will be called no more than once
    per element. This is important because the lru_cache() will hash
    the key multiple times on a cache miss.
    """

    __slots__ = 'hashvalue'

    def __init__(self, tup, hash=hash):
        self[:] = tup
        self.hashvalue = hash(tup)

    def __hash__(self):
        return self.hashvalue


def _make_key(args, kwds, typed,
              kwd_mark=(object(),),
              fasttypes=set([int, str, frozenset, type(None)]),
              sorted=sorted, tuple=tuple, type=type, len=len):
    """Make a cache key from optionally typed positional and keyword arguments

    The key is constructed in a way that is flat as possible rather than
    as a nested structure that would take more memory.

    If there is only a single argument and its data type is known to cache
    its hash value, then that argument is returned without a wrapper. This
    saves space and improves lookup speed.
    """
    key = args
    if kwds:
        sorted_items = sorted(kwds.items())
        key += kwd_mark
        for item in sorted_items:
            key += item
    if typed:
        key += tuple(type(v) for v in args)
        if kwds:
            key += tuple(type(v) for k, v in sorted_items)
    elif len(key) == 1 and type(key[0]) in fasttypes:
        return key[0]
    return _HashedSeq(key)


def lru_cache(maxsize=128, typed=False):
    """Least-recently-used cache decorator.

    If *maxsize* is set to None, the LRU features are disabled and the cache
    can grow without bound.

    If *typed* is True, arguments of different types will be cached separately.
    For example, f(3.0) and f(3) will be treated as distinct calls with
    distinct results.

    Arguments to the cached function must be hashable.

    View the cache statistics named tuple (hits, misses, maxsize, currsize)
    with f.cache_info(). Clear the cache and statistics with f.cache_clear().
    Access the underlying function with f.__wrapped__.

    See: http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used
    """

    # Users should only access the lru_cache through its public API:
    # cache_info, cache_clear, and f.__wrapped__
    # The internals of the lru_cache are encapsulated for thread safety and
    # to allow the implementation to change (including a possible C version).

    # Early detection of an erroneous call to @lru_cache without any arguments
    # resulting in the inner function being passed to maxsize instead of an
    # integer or None.
    if maxsize is not None and not isinstance(maxsize, int):
        raise TypeError('Expected maxsize to be an integer or None')

    def decorating_function(user_function):
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed)
        return update_wrapper(wrapper, user_function)

    return decorating_function


def _lru_cache_wrapper(user_function, maxsize, typed):
    # Constants shared by all lru cache instances:
    sentinel = object()      # unique object used to signal cache misses
    make_key = _make_key     # build a key from the function arguments
    PREV, NEXT, KEY, RESULT = 0, 1, 2, 3   # names for the link fields

    cache = {}
    cache_get = cache.get    # bound method to lookup a key or return None
    lock = RLock()           # because linkedlist updates aren't threadsafe
    root = []                # root of the circular doubly linked list
    root[:] = [root, root, None, None]     # initialize by pointing to self
    hits_misses_full_root = [0, 0, False, root]
    HITS, MISSES, FULL, ROOT = 0, 1, 2, 3

    if maxsize == 0:

        def wrapper(*args, **kwds):
            # No caching -- just a statistics update after a successful call
            result = user_function(*args, **kwds)
            hits_misses_full_root[MISSES] += 1
            return result

    elif maxsize is None:

        def wrapper(*args, **kwds):
            # Simple caching without ordering or size limit
            key = make_key(args, kwds, typed)
            result = cache_get(key, sentinel)
            if result is not sentinel:
                hits_misses_full_root[HITS] += 1
                return result
            result = user_function(*args, **kwds)
            cache[key] = result
            hits_misses_full_root[MISSES] += 1
            return result

    else:

        def wrapper(*args, **kwds):
            # Size limited caching that tracks accesses by recency
            key = make_key(args, kwds, typed)
            lock.acquire()
            try:
                link = cache_get(key)
                if link is not None:
                    # Move the link to the front of the circular queue
                    root = hits_misses_full_root[ROOT]
                    link_prev, link_next, _key, result = link
                    link_prev[NEXT] = link_next
                    link_next[PREV] = link_prev
                    last = root[PREV]
                    last[NEXT] = root[PREV] = link
                    link[PREV] = last
                    link[NEXT] = root
                    hits_misses_full_root[HITS] += 1
                    return result
            finally:
                lock.release()
            result = user_function(*args, **kwds)
            lock.acquire()
            try:
                if key in cache:
                    # Getting here means that this same key was added to the
                    # cache while the lock was released. Since the link
                    # update is already done, we need only return the
                    # computed result and update the count of misses.
                    pass
                elif hits_misses_full_root[FULL]:
                    # Use the old root to store the new key and result.
                    oldroot = root = hits_misses_full_root[ROOT]
                    oldroot[KEY] = key
                    oldroot[RESULT] = result
                    # Empty the oldest link and make it the new root.
                    # Keep a reference to the old key and old result to
                    # prevent their ref counts from going to zero during the
                    # update. That will prevent potentially arbitrary object
                    # clean-up code (i.e. __del__) from running while we're
                    # still adjusting the links.
                    root = hits_misses_full_root[ROOT] = oldroot[NEXT]
                    oldkey = root[KEY]
                    oldresult = root[RESULT]
                    root[KEY] = root[RESULT] = None
                    # Now update the cache dictionary.
                    del cache[oldkey]
                    # Save the potentially reentrant cache[key] assignment
                    # for last, after the root and links have been put in
                    # a consistent state.
                    cache[key] = oldroot
                else:
                    # Put result in a new link at the front of the queue.
                    root = hits_misses_full_root[ROOT]
                    last = root[PREV]
                    link = [last, root, key, result]
                    last[NEXT] = root[PREV] = cache[key] = link
                    # Use the __len__() method instead of the len() function
                    # which could potentially be wrapped in an lru_cache
                    # itself.
                    hits_misses_full_root[FULL] = (cache.__len__() >= maxsize)
                hits_misses_full_root[MISSES] += 1
            finally:
                lock.release()
            return result

    def cache_clear():
        """Clear the cache and cache statistics"""
        lock.acquire()
        try:
            cache.clear()
            root = hits_misses_full_root[ROOT]
            root[:] = [root, root, None, None]
            hits_misses_full_root[HITS] = 0
            hits_misses_full_root[MISSES] = 0
            hits_misses_full_root[FULL] = False
        finally:
            lock.release()

    wrapper.cache_clear = cache_clear
    return wrapper
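# Typical use of the backport above mirrored the stdlib decorator; note this
# trimmed copy exposes cache_clear() but not cache_info():
#
#     @lru_cache(maxsize=32)
#     def fib(n):
#         return n if n < 2 else fib(n - 1) + fib(n - 2)
#
#     fib(30)
#     fib.cache_clear()
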
@@ -0,0 +1,166 @@
# Copyright 2017, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

# !mitogen: minify_safe

"""mitogen.profiler

Record and report cProfile statistics from a run. Creates one aggregated
output file, one aggregate containing only workers, and one for the
top-level process.

Usage:
    mitogen.profiler record <dest_path> <tool> [args ..]
    mitogen.profiler report <dest_path> [sort_mode]
    mitogen.profiler stat <sort_mode> <tool> [args ..]

Mode:
    record: Record a trace.
    report: Report on a previously recorded trace.
    stat: Record and report in a single step.

Where:
    dest_path: Filesystem prefix to write .pstats files to.
    sort_mode: Sorting mode; defaults to "cumulative". See:
        https://docs.python.org/2/library/profile.html#pstats.Stats.sort_stats

Example:
    mitogen.profiler record /tmp/mypatch ansible-playbook foo.yml
    mitogen.profiler report /tmp/mypatch-worker.pstats
"""

from __future__ import print_function
import os
import pstats
import cProfile
import shutil
import subprocess
import sys
import tempfile
import time

import mitogen.core


def try_merge(stats, path):
    try:
        stats.add(path)
        return True
    except Exception as e:
        print('Failed. Race? Will retry. %s' % (e,))
        return False


def merge_stats(outpath, inpaths):
    first, rest = inpaths[0], inpaths[1:]
    for x in range(5):
        try:
            stats = pstats.Stats(first)
            break
        except EOFError:
            # The file may still be being written by its owner; retry.
            time.sleep(0.2)

    print("Writing %r..." % (outpath,))
    for path in rest:
        #print("Merging %r into %r.." % (os.path.basename(path), outpath))
        for x in range(5):
            if try_merge(stats, path):
                break
            time.sleep(0.2)

    stats.dump_stats(outpath)


def generate_stats(outpath, tmpdir):
    print('Generating stats..')
    all_paths = []
    paths_by_ident = {}

    for name in os.listdir(tmpdir):
        if name.endswith('-dump.pstats'):
            ident, _, pid = name.partition('-')
            path = os.path.join(tmpdir, name)
            all_paths.append(path)
            paths_by_ident.setdefault(ident, []).append(path)

    merge_stats('%s-all.pstat' % (outpath,), all_paths)
    for ident, paths in paths_by_ident.items():
        merge_stats('%s-%s.pstat' % (outpath, ident), paths)


def do_record(tmpdir, path, *args):
    env = os.environ.copy()
    fmt = '%(identity)s-%(pid)s.%(now)s-dump.%(ext)s'
    env['MITOGEN_PROFILING'] = '1'
    env['MITOGEN_PROFILE_FMT'] = os.path.join(tmpdir, fmt)
    rc = subprocess.call(args, env=env)
    generate_stats(path, tmpdir)
    return rc


def do_report(tmpdir, path, sort='cumulative'):
    stats = pstats.Stats(path).sort_stats(sort)
    stats.print_stats(100)


def do_stat(tmpdir, sort, *args):
    valid_sorts = pstats.Stats.sort_arg_dict_default
    if sort not in valid_sorts:
        sys.stderr.write('Invalid sort %r, must be one of %s\n' %
                         (sort, ', '.join(sorted(valid_sorts))))
        sys.exit(1)

    outfile = os.path.join(tmpdir, 'combined')
    do_record(tmpdir, outfile, *args)
    aggs = ('app.main', 'mitogen.broker', 'mitogen.child_main',
            'mitogen.service.pool', 'Strategy', 'WorkerProcess',
            'all')
    for agg in aggs:
        path = '%s-%s.pstat' % (outfile, agg)
        if os.path.exists(path):
            print()
            print()
            print('------ Aggregation %r ------' % (agg,))
            print()
            do_report(tmpdir, path, sort)
            print()


def main():
    if len(sys.argv) < 2 or sys.argv[1] not in ('record', 'report', 'stat'):
        sys.stderr.write(__doc__)
        sys.exit(1)

    func = globals()['do_' + sys.argv[1]]
    tmpdir = tempfile.mkdtemp(prefix='mitogen.profiler')
    try:
        sys.exit(func(tmpdir, *sys.argv[2:]) or 0)
    finally:
        shutil.rmtree(tmpdir)

if __name__ == '__main__':
    main()
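# The emitted .pstat files are plain cProfile dumps, so they can also be
# examined directly with the stdlib (the path below is an assumption, taken
# from the usage example in the docstring above):
#
#     import pstats
#     stats = pstats.Stats('/tmp/mypatch-all.pstat')
#     stats.sort_stats('cumulative').print_stats(25)
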
@@ -0,0 +1,40 @@

# issue #429: tool for extracting keys out of message catalogs and turning them
# into the big gob of base64 as used in mitogen/sudo.py
#
# Usage:
# - apt-get source libpam0g
# - cd */po/
# - python ~/pogrep.py "Password: "

import sys
import shlex
import glob


last_word = None

for path in glob.glob('*.po'):
    for line in open(path):
        bits = shlex.split(line, comments=True)
        if not bits:
            continue

        word = bits[0]
        if len(bits) < 2 or not word:
            continue

        rest = bits[1]
        if not rest:
            continue

        if last_word == 'msgid' and word == 'msgstr':
            if last_rest == sys.argv[1]:
                thing = rest.rstrip(': ').decode('utf-8').lower().encode('utf-8').encode('base64').rstrip()
                print ' %-60s # %s' % (repr(thing)+',', path)

        last_word = word
        last_rest = rest

#ag -A 1 'msgid "Password: "'|less | grep msgstr | grep -v '""'|cut -d'"' -f2|cut -d'"' -f1| tr -d :