Merge remote-tracking branch 'origin/dmw'
* origin/dmw:
  tests: add new compression parameter to mitogen_get_stack results
  tests: disable affinity_test on Travis :/
  issue #508: fix responder stats test due to new smaller parent.py.
  issue #508: tests: skip minify_test Py2.4/2.5 for profiler.py.
  tests: fix fallout from 36fb318adf5c56e729296c3efce84f4dd75ced4e
  issue #520: add AIX auth failure string to su.
  tests: move affinity_test to Ansible tests.
  core: cProfile is not available in 2.4.
  issue #505: docs: add new detail graph for one scenario.
  docs: update and re-record profile graphs in docs; closes #505
  service: fix PushFileService exception
  tests: pad out localhost-*
  service: start pool shutdown on broker shutdown.
  master: .encode() needed for Py3.
  ansible: stash PID files in CWD if requested for debugging.
  issue #508: master: minify_safe_re must be bytes for Py3.
  bench: tidy up and cpu-pin some more files.
  tests: add localhost-x100
  ansible: double the default pool size.
  ansible: raise error with correct exception type.
  issue #508: master: minify all Mitogen/ansible_mitogen sources.
  parent: PartialZlib docstrings.
  ansible: hacky parser to alow bools to be specified on command line
  parent: pre-cache bootstrap if possible.
  docs: update Changelog.
  ansible: add mitogen_ssh_compression variable.
  service: PushFileService never recorded a file as sent.
  parent: synchronize get_core_source()
  service: use correct profile aggregation name.
  SyntaxError.
  ansible: don't pin controller if <4 cores.
  tests: make soak testing work reliably on vanilla.
  docs: changelog tidyups.
  ansible: document and make affinity stuff portable to non-Linux
  ansible: fix affinity.py test failure on 2 cores.
  ansible: preheat PluginLoader caches before fork.
  tests: make mitogen_shutdown_all be run_once by default.
  docs: update Changelog.
  ansible: use Poller for WorkerProcess; closes #491.
  ansible: new multiplexer/workers configuration
  docs: update Changelog.
  docs: update Changelog.
  ansible: pin connection multiplexer to a single core
  utils: pad out reset_affinity() and integrate with detach_popen()
  utils: import reset_affinity() function.
  master: set Router.profiling if MITOGEN_PROFILING variable present.
  parent: don't kill children when profiling is active.
  ansible: hook strategy and worker processes into profiler
  profiler: import from linear2 branch
  core: tidy up existing profiling code and support MITOGEN_PROFILE_FMT
  issue #260: redundant if statement.
  ansible: ensure MuxProcess MITOGEN_PROFILING results reach disk.
  ansible/bench: make end= configurable.
  master: cache sent/forwarded module names
@@ -0,0 +1,241 @@
# Copyright 2017, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

"""
As Mitogen separates asynchronous IO out to a broker thread, communication
necessarily involves context switching and waking that thread. When application
threads and the broker share a CPU, this can be almost invisibly fast - around
25 microseconds for a full A->B->A round-trip.

However, when threads are scheduled on different CPUs, round-trip delays
regularly vary wildly, easily running into milliseconds. Many contributing
factors exist, not least scenarios like:

1. A is preempted immediately after waking B, but before releasing the GIL.
2. B wakes from IO wait only to immediately enter futex wait.
3. A may wait 10ms or more for another timeslice, as the scheduler on its CPU
   runs threads unrelated to its transaction (i.e. not B), wakes only to
   release its GIL, then enters IO sleep waiting for a reply from B, which
   cannot exist yet.
4. B wakes, acquires GIL, performs work, and sends reply to A, causing it to
   wake. B is preempted before releasing GIL.
5. A wakes from IO wait only to immediately enter futex wait.
6. B may wait 10ms or more for another timeslice, wakes only to release its
   GIL, then sleeps again.
7. A wakes, acquires GIL, finally receives reply.

Per the above, if we are unlucky, on an even moderately busy machine it is
possible to lose milliseconds just in scheduling delay, and the effect is
compounded when pairs of threads in process A are communicating with pairs of
threads in process B using the same scheme, such as when Ansible WorkerProcess
is communicating with ContextService in the connection multiplexer. In the
worst case it could involve 4 threads working in lockstep spread across 4 busy
CPUs.

Since multithreading in Python is essentially useless except for waiting on IO
due to the presence of the GIL, at least in Ansible there is no good reason for
threads in the same process to run on distinct CPUs - they always operate in
lockstep due to the GIL, and are thus vulnerable to issues like the above.

Linux lacks any natural API to describe what we want; it only permits
individual threads to be constrained to run on specific CPUs, and for that
constraint to be inherited by new threads and forks of the constrained thread.

This module therefore implements a CPU pinning policy for Ansible processes,
providing methods that should be called early in any new process, either to
rebalance which CPU it is pinned to, or in the case of subprocesses, to remove
the pinning entirely. It is likely to require ongoing tweaking, since pinning
necessarily involves preventing the scheduler from making load balancing
decisions.
"""

import ctypes
import mmap
import multiprocessing
import os
import struct

import mitogen.parent


try:
    _libc = ctypes.CDLL(None, use_errno=True)
    _strerror = _libc.strerror
    _strerror.restype = ctypes.c_char_p
    _pthread_mutex_init = _libc.pthread_mutex_init
    _pthread_mutex_lock = _libc.pthread_mutex_lock
    _pthread_mutex_unlock = _libc.pthread_mutex_unlock
    _sched_setaffinity = _libc.sched_setaffinity
except (OSError, AttributeError):
    _libc = None
    _strerror = None
    _pthread_mutex_init = None
    _pthread_mutex_lock = None
    _pthread_mutex_unlock = None
    _sched_setaffinity = None


class pthread_mutex_t(ctypes.Structure):
    """
    Wrap pthread_mutex_t to allow storing a lock in shared memory.
    """
    _fields_ = [
        ('data', ctypes.c_uint8 * 512),
    ]

    def init(self):
        if _pthread_mutex_init(self.data, 0):
            raise Exception(_strerror(ctypes.get_errno()))

    def acquire(self):
        if _pthread_mutex_lock(self.data):
            raise Exception(_strerror(ctypes.get_errno()))

    def release(self):
        if _pthread_mutex_unlock(self.data):
            raise Exception(_strerror(ctypes.get_errno()))


class State(ctypes.Structure):
    """
    Contents of the shared memory segment. This allows :meth:`Manager.assign`
    to be called from any child, since affinity assignment must happen from
    within the context of the new child process.
    """
    _fields_ = [
        ('lock', pthread_mutex_t),
        ('counter', ctypes.c_uint8),
    ]


class Policy(object):
    """
    Process affinity policy.
    """
    def assign_controller(self):
        """
        Assign the Ansible top-level policy to this process.
        """

    def assign_muxprocess(self):
        """
        Assign the MuxProcess policy to this process.
        """

    def assign_worker(self):
        """
        Assign the WorkerProcess policy to this process.
        """

    def assign_subprocess(self):
        """
        Assign the helper subprocess policy to this process.
        """


class LinuxPolicy(Policy):
    """
    :class:`Policy` for Linux machines. The scheme here was tested on an
    otherwise idle 16 thread machine.

    - The connection multiplexer is pinned to CPU 0.
    - The Ansible top-level (strategy) is pinned to CPU 1.
    - WorkerProcesses are pinned sequentially to 2..N, wrapping around when no
      more CPUs exist.
    - Children such as SSH may be scheduled on any CPU except 0/1.

    If the machine has fewer than 4 cores available, the top-level and workers
    are balanced across the remaining CPUs, i.e. no CPU is reserved for the
    top-level process.

    This could at least be improved by having workers pinned to independent
    cores, before reusing the second hyperthread of an existing core.

    A hook is installed that causes :meth:`_clear` to run in the child of any
    process created with :func:`mitogen.parent.detach_popen`, ensuring
    CPU-intensive children like SSH are not forced to share the same core as
    the (otherwise potentially very busy) parent.
    """
    def __init__(self):
        self.mem = mmap.mmap(-1, 4096)
        self.state = State.from_buffer(self.mem)
        self.state.lock.init()
        if self._cpu_count() < 4:
            # Small machine: reserve only CPU 0 for the multiplexer, and
            # balance the top-level process along with the workers.
            self._reserve_mask = 1
            self._reserve_shift = 1
            self._reserve_controller = False
        else:
            # Reserve CPU 0 for the multiplexer and CPU 1 for the top-level.
            self._reserve_mask = 3
            self._reserve_shift = 2
            self._reserve_controller = True

    def _set_affinity(self, mask):
        # Ensure children forked via detach_popen() undo the pinning.
        mitogen.parent._preexec_hook = self._clear
        s = struct.pack('L', mask)
        _sched_setaffinity(os.getpid(), len(s), s)

    def _cpu_count(self):
        return multiprocessing.cpu_count()

    def _balance(self):
        self.state.lock.acquire()
        try:
            n = self.state.counter
            self.state.counter += 1
        finally:
            self.state.lock.release()

        self._set_cpu(self._reserve_shift + (
            n % max(1, (self._cpu_count() - self._reserve_shift))
        ))

    def _set_cpu(self, cpu):
        self._set_affinity(1 << cpu)

    def _clear(self):
        self._set_affinity(0xffffffff & ~self._reserve_mask)

    def assign_controller(self):
        if self._reserve_controller:
            self._set_cpu(1)
        else:
            self._balance()

    def assign_muxprocess(self):
        self._set_cpu(0)

    def assign_worker(self):
        self._balance()

    def assign_subprocess(self):
        self._clear()


if _sched_setaffinity is not None:
    policy = LinuxPolicy()
else:
    policy = Policy()
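Note: the syscall `_set_affinity()` wraps is sched_setaffinity(2), reached via
ctypes because Python 2.x has no os.sched_setaffinity(). A standalone sketch
of the same call, with illustrative mask values (not part of this diff):

    import ctypes
    import os
    import struct

    # Pack a CPU bitmask into one machine word, passed as a minimal cpu_set_t.
    libc = ctypes.CDLL(None, use_errno=True)

    def set_affinity_mask(mask):
        s = struct.pack('L', mask)
        if libc.sched_setaffinity(os.getpid(), len(s), s):
            raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))

    set_affinity_mask(1 << 0)           # pin to CPU 0, as assign_muxprocess() does
    set_affinity_mask(0xffffffff & ~3)  # any CPU except 0/1, as _clear() does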
@@ -1,3 +1,3 @@
-*.pcapng filter=lfs diff=lfs merge=lfs -text
+**pcap** filter=lfs diff=lfs merge=lfs -text
 run_hostname_100_times_mito.pcap.gz filter=lfs diff=lfs merge=lfs -text
 run_hostname_100_times_vanilla.pcap.gz filter=lfs diff=lfs merge=lfs -text
@@ -0,0 +1,16 @@
import sys
# Add viewBox attr to SVGs lacking it, so IE scales properly.

import lxml.etree
import glob


for name in sys.argv[1:]:  # previously: glob.glob('*/*.svg') + glob.glob('images/ansible/*.svg')
    doc = lxml.etree.parse(open(name))
    svg = doc.getroot()
    for elem in svg.cssselect('[stroke-width]'):
        # Compare numerically: a string comparison would rank '10' below '2'.
        if float(elem.attrib['stroke-width']) < 2:
            elem.attrib['stroke-width'] = '2'

    # tostring() returns bytes when an encoding is given, so write in binary
    # mode, once per input file.
    open(name, 'wb').write(lxml.etree.tostring(
        svg, xml_declaration=True, encoding='UTF-8'))
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d9b4d4ff263003bd16e44c265783e7c1deff19950e453e3adeb8a6ab5052081
size 175120
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a993832501b7948a38c2e8ced3467d1cb04279a57ddd6afc735ed96ec509c08
size 7623337
@@ -1,288 +0,0 @@
# encoding: utf-8
"""Selected backports from Python stdlib functools module
"""
# Written by Nick Coghlan <ncoghlan at gmail.com>,
# Raymond Hettinger <python at rcn.com>,
# and Łukasz Langa <lukasz at langa.pl>.
# Copyright (C) 2006-2013 Python Software Foundation.

__all__ = [
    'update_wrapper', 'wraps', 'WRAPPER_ASSIGNMENTS', 'WRAPPER_UPDATES',
    'lru_cache',
]

from threading import RLock


################################################################################
### update_wrapper() and wraps() decorator
################################################################################

# update_wrapper() and wraps() are tools to help write
# wrapper functions that can handle naive introspection

WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__qualname__', '__doc__',
                       '__annotations__')
WRAPPER_UPDATES = ('__dict__',)

def update_wrapper(wrapper,
                   wrapped,
                   assigned=WRAPPER_ASSIGNMENTS,
                   updated=WRAPPER_UPDATES):
    """Update a wrapper function to look like the wrapped function

       wrapper is the function to be updated
       wrapped is the original function
       assigned is a tuple naming the attributes assigned directly
       from the wrapped function to the wrapper function (defaults to
       functools.WRAPPER_ASSIGNMENTS)
       updated is a tuple naming the attributes of the wrapper that
       are updated with the corresponding attribute from the wrapped
       function (defaults to functools.WRAPPER_UPDATES)
    """
    for attr in assigned:
        try:
            value = getattr(wrapped, attr)
        except AttributeError:
            pass
        else:
            setattr(wrapper, attr, value)
    for attr in updated:
        getattr(wrapper, attr).update(getattr(wrapped, attr, {}))
    # Issue #17482: set __wrapped__ last so we don't inadvertently copy it
    # from the wrapped function when updating __dict__
    wrapper.__wrapped__ = wrapped
    # Return the wrapper so this can be used as a decorator via partial()
    return wrapper

def wraps(wrapped,
          assigned=WRAPPER_ASSIGNMENTS,
          updated=WRAPPER_UPDATES):
    """Decorator factory to apply update_wrapper() to a wrapper function

       Returns a decorator that invokes update_wrapper() with the decorated
       function as the wrapper argument and the arguments to wraps() as the
       remaining arguments. Default arguments are as for update_wrapper().
       This is a convenience function to simplify applying partial() to
       update_wrapper().
    """
    return partial(update_wrapper, wrapped=wrapped,
                   assigned=assigned, updated=updated)


################################################################################
### partial() argument application
################################################################################

# Purely functional, no descriptor behaviour
def partial(func, *args, **keywords):
    """New function with partial application of the given arguments
    and keywords.
    """
    if hasattr(func, 'func'):
        args = func.args + args
        tmpkw = func.keywords.copy()
        tmpkw.update(keywords)
        keywords = tmpkw
        del tmpkw
        func = func.func

    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc


################################################################################
### LRU Cache function decorator
################################################################################

class _HashedSeq(list):
    """ This class guarantees that hash() will be called no more than once
        per element.  This is important because the lru_cache() will hash
        the key multiple times on a cache miss.
    """

    __slots__ = 'hashvalue'

    def __init__(self, tup, hash=hash):
        self[:] = tup
        self.hashvalue = hash(tup)

    def __hash__(self):
        return self.hashvalue

def _make_key(args, kwds, typed,
              kwd_mark=(object(),),
              fasttypes=set([int, str, frozenset, type(None)]),
              sorted=sorted, tuple=tuple, type=type, len=len):
    """Make a cache key from optionally typed positional and keyword arguments

    The key is constructed in a way that is flat as possible rather than
    as a nested structure that would take more memory.

    If there is only a single argument and its data type is known to cache
    its hash value, then that argument is returned without a wrapper.  This
    saves space and improves lookup speed.
    """
    key = args
    if kwds:
        sorted_items = sorted(kwds.items())
        key += kwd_mark
        for item in sorted_items:
            key += item
    if typed:
        key += tuple(type(v) for v in args)
        if kwds:
            key += tuple(type(v) for k, v in sorted_items)
    elif len(key) == 1 and type(key[0]) in fasttypes:
        return key[0]
    return _HashedSeq(key)

def lru_cache(maxsize=128, typed=False):
    """Least-recently-used cache decorator.

    If *maxsize* is set to None, the LRU features are disabled and the cache
    can grow without bound.

    If *typed* is True, arguments of different types will be cached separately.
    For example, f(3.0) and f(3) will be treated as distinct calls with
    distinct results.

    Arguments to the cached function must be hashable.

    View the cache statistics named tuple (hits, misses, maxsize, currsize)
    with f.cache_info(). Clear the cache and statistics with f.cache_clear().
    Access the underlying function with f.__wrapped__.

    See: http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used
    """

    # Users should only access the lru_cache through its public API:
    #   cache_info, cache_clear, and f.__wrapped__
    # The internals of the lru_cache are encapsulated for thread safety and
    # to allow the implementation to change (including a possible C version).

    # Early detection of an erroneous call to @lru_cache without any arguments
    # resulting in the inner function being passed to maxsize instead of an
    # integer or None.
    if maxsize is not None and not isinstance(maxsize, int):
        raise TypeError('Expected maxsize to be an integer or None')

    def decorating_function(user_function):
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed)
        return update_wrapper(wrapper, user_function)

    return decorating_function

def _lru_cache_wrapper(user_function, maxsize, typed):
    # Constants shared by all lru cache instances:
    sentinel = object()                   # unique object used to signal cache misses
    make_key = _make_key                  # build a key from the function arguments
    PREV, NEXT, KEY, RESULT = 0, 1, 2, 3  # names for the link fields

    cache = {}
    cache_get = cache.get  # bound method to lookup a key or return None
    lock = RLock()         # because linkedlist updates aren't threadsafe
    root = []              # root of the circular doubly linked list
    root[:] = [root, root, None, None]    # initialize by pointing to self
    hits_misses_full_root = [0, 0, False, root]
    HITS, MISSES, FULL, ROOT = 0, 1, 2, 3

    if maxsize == 0:

        def wrapper(*args, **kwds):
            # No caching -- just a statistics update after a successful call
            result = user_function(*args, **kwds)
            hits_misses_full_root[MISSES] += 1
            return result

    elif maxsize is None:

        def wrapper(*args, **kwds):
            # Simple caching without ordering or size limit
            key = make_key(args, kwds, typed)
            result = cache_get(key, sentinel)
            if result is not sentinel:
                hits_misses_full_root[HITS] += 1
                return result
            result = user_function(*args, **kwds)
            cache[key] = result
            hits_misses_full_root[MISSES] += 1
            return result

    else:

        def wrapper(*args, **kwds):
            # Size limited caching that tracks accesses by recency
            key = make_key(args, kwds, typed)
            lock.acquire()
            try:
                link = cache_get(key)
                if link is not None:
                    # Move the link to the front of the circular queue
                    root = hits_misses_full_root[ROOT]
                    link_prev, link_next, _key, result = link
                    link_prev[NEXT] = link_next
                    link_next[PREV] = link_prev
                    last = root[PREV]
                    last[NEXT] = root[PREV] = link
                    link[PREV] = last
                    link[NEXT] = root
                    hits_misses_full_root[HITS] += 1
                    return result
            finally:
                lock.release()
            result = user_function(*args, **kwds)
            lock.acquire()
            try:
                if key in cache:
                    # Getting here means that this same key was added to the
                    # cache while the lock was released.  Since the link
                    # update is already done, we need only return the
                    # computed result and update the count of misses.
                    pass
                elif hits_misses_full_root[FULL]:
                    # Use the old root to store the new key and result.
                    oldroot = root = hits_misses_full_root[ROOT]
                    oldroot[KEY] = key
                    oldroot[RESULT] = result
                    # Empty the oldest link and make it the new root.
                    # Keep a reference to the old key and old result to
                    # prevent their ref counts from going to zero during the
                    # update. That will prevent potentially arbitrary object
                    # clean-up code (i.e. __del__) from running while we're
                    # still adjusting the links.
                    root = hits_misses_full_root[ROOT] = oldroot[NEXT]
                    oldkey = root[KEY]
                    oldresult = root[RESULT]
                    root[KEY] = root[RESULT] = None
                    # Now update the cache dictionary.
                    del cache[oldkey]
                    # Save the potentially reentrant cache[key] assignment
                    # for last, after the root and links have been put in
                    # a consistent state.
                    cache[key] = oldroot
                else:
                    # Put result in a new link at the front of the queue.
                    root = hits_misses_full_root[ROOT]
                    last = root[PREV]
                    link = [last, root, key, result]
                    last[NEXT] = root[PREV] = cache[key] = link
                    # Use the __len__() method instead of the len() function
                    # which could potentially be wrapped in an lru_cache itself.
                    hits_misses_full_root[FULL] = (cache.__len__() >= maxsize)
                hits_misses_full_root[MISSES] += 1
            finally:
                lock.release()
            return result

    def cache_clear():
        """Clear the cache and cache statistics"""
        lock.acquire()
        try:
            cache.clear()
            root = hits_misses_full_root[ROOT]
            root[:] = [root, root, None, None]
            hits_misses_full_root[HITS] = 0
            hits_misses_full_root[MISSES] = 0
            hits_misses_full_root[FULL] = False
        finally:
            lock.release()

    wrapper.cache_clear = cache_clear
    return wrapper
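Note: the deleted backport above mirrors the stdlib functools interface, so a
minimal usage sketch against the stdlib equivalent shows the same behaviour
(the fib() example is hypothetical, not from this diff):

    from functools import lru_cache  # stdlib equivalent of the backport

    @lru_cache(maxsize=128)
    def fib(n):
        # Recursive calls are served from the cache, collapsing the
        # exponential call tree into a linear number of real invocations.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(30))      # 832040
    fib.cache_clear()   # evict all entries and reset statistics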
@@ -0,0 +1,166 @@
# Copyright 2017, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

# !mitogen: minify_safe

"""mitogen.profiler
Record and report cProfile statistics from a run. Creates one aggregated
output file, one aggregate containing only workers, and one for the
top-level process.

Usage:
    mitogen.profiler record <dest_path> <tool> [args ..]
    mitogen.profiler report <dest_path> [sort_mode]
    mitogen.profiler stat <sort_mode> <tool> [args ..]

Mode:
    record: Record a trace.
    report: Report on a previously recorded trace.
    stat: Record and report in a single step.

Where:
    dest_path: Filesystem prefix to write .pstats files to.
    sort_mode: Sorting mode; defaults to "cumulative". See:
        https://docs.python.org/2/library/profile.html#pstats.Stats.sort_stats

Example:
    mitogen.profiler record /tmp/mypatch ansible-playbook foo.yml
    mitogen.profiler report /tmp/mypatch-worker.pstats
"""

from __future__ import print_function
import os
import pstats
import cProfile
import shutil
import subprocess
import sys
import tempfile
import time

import mitogen.core


def try_merge(stats, path):
    try:
        stats.add(path)
        return True
    except Exception as e:
        print('Failed. Race? Will retry. %s' % (e,))
        return False


def merge_stats(outpath, inpaths):
    first, rest = inpaths[0], inpaths[1:]
    for x in range(5):
        try:
            stats = pstats.Stats(first)
        except EOFError:
            time.sleep(0.2)
            continue

        print("Writing %r..." % (outpath,))
        for path in rest:
            #print("Merging %r into %r.." % (os.path.basename(path), outpath))
            for x in range(5):
                if try_merge(stats, path):
                    break
                time.sleep(0.2)
        # Loaded and merged successfully; don't repeat the merge.
        break

    stats.dump_stats(outpath)


def generate_stats(outpath, tmpdir):
    print('Generating stats..')
    all_paths = []
    paths_by_ident = {}

    for name in os.listdir(tmpdir):
        if name.endswith('-dump.pstats'):
            ident, _, pid = name.partition('-')
            path = os.path.join(tmpdir, name)
            all_paths.append(path)
            paths_by_ident.setdefault(ident, []).append(path)

    merge_stats('%s-all.pstat' % (outpath,), all_paths)
    for ident, paths in paths_by_ident.items():
        merge_stats('%s-%s.pstat' % (outpath, ident), paths)


def do_record(tmpdir, path, *args):
    env = os.environ.copy()
    fmt = '%(identity)s-%(pid)s.%(now)s-dump.%(ext)s'
    env['MITOGEN_PROFILING'] = '1'
    env['MITOGEN_PROFILE_FMT'] = os.path.join(tmpdir, fmt)
    rc = subprocess.call(args, env=env)
    generate_stats(path, tmpdir)
    return rc


def do_report(tmpdir, path, sort='cumulative'):
    stats = pstats.Stats(path).sort_stats(sort)
    stats.print_stats(100)


def do_stat(tmpdir, sort, *args):
    valid_sorts = pstats.Stats.sort_arg_dict_default
    if sort not in valid_sorts:
        sys.stderr.write('Invalid sort %r, must be one of %s\n' %
                         (sort, ', '.join(sorted(valid_sorts))))
        sys.exit(1)

    outfile = os.path.join(tmpdir, 'combined')
    do_record(tmpdir, outfile, *args)
    aggs = ('app.main', 'mitogen.broker', 'mitogen.child_main',
            'mitogen.service.pool', 'Strategy', 'WorkerProcess',
            'all')
    for agg in aggs:
        path = '%s-%s.pstat' % (outfile, agg)
        if os.path.exists(path):
            print()
            print()
            print('------ Aggregation %r ------' % (agg,))
            print()
            do_report(tmpdir, path, sort)
            print()


def main():
    if len(sys.argv) < 2 or sys.argv[1] not in ('record', 'report', 'stat'):
        sys.stderr.write(__doc__)
        sys.exit(1)

    func = globals()['do_' + sys.argv[1]]
    tmpdir = tempfile.mkdtemp(prefix='mitogen.profiler')
    try:
        sys.exit(func(tmpdir, *sys.argv[2:]) or 0)
    finally:
        shutil.rmtree(tmpdir)

if __name__ == '__main__':
    main()
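Note: the profiler drives the wrapped tool purely through the environment, so
"record" can wrap any command. A sketch of what do_record() amounts to (the
/tmp paths are illustrative only):

    import os
    import subprocess

    # Equivalent of "mitogen.profiler record /tmp/mypatch ansible-playbook
    # foo.yml": each profiled process writes one
    # "<identity>-<pid>.<now>-dump.pstats" file, which generate_stats()
    # later merges per identity.
    env = os.environ.copy()
    env['MITOGEN_PROFILING'] = '1'
    env['MITOGEN_PROFILE_FMT'] = '/tmp/profs/%(identity)s-%(pid)s.%(now)s-dump.%(ext)s'
    subprocess.call(['ansible-playbook', 'foo.yml'], env=env)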
@@ -0,0 +1,63 @@

import multiprocessing
import os
import tempfile

import mock
import unittest2
import testlib

import mitogen.parent
import ansible_mitogen.affinity


@unittest2.skipIf(
    reason='Linux/SMP only',
    condition=(not (
        os.uname()[0] == 'Linux' and
        multiprocessing.cpu_count() >= 4
    ))
)
class LinuxPolicyTest(testlib.TestCase):
    klass = ansible_mitogen.affinity.LinuxPolicy

    def setUp(self):
        self.policy = self.klass()

    def _get_cpus(self, path='/proc/self/status'):
        fp = open(path)
        try:
            for line in fp:
                if line.startswith('Cpus_allowed'):
                    return int(line.split()[1], 16)
        finally:
            fp.close()

    def test_set_clear(self):
        before = self._get_cpus()
        self.policy._set_cpu(3)
        self.assertEquals(self._get_cpus(), 1 << 3)
        self.policy._clear()
        self.assertEquals(self._get_cpus(), before)

    def test_clear_on_popen(self):
        tf = tempfile.NamedTemporaryFile()
        try:
            before = self._get_cpus()
            self.policy._set_cpu(3)
            my_cpu = self._get_cpus()

            pid = mitogen.parent.detach_popen(
                args=['cp', '/proc/self/status', tf.name]
            )
            os.waitpid(pid, 0)

            his_cpu = self._get_cpus(tf.name)
            self.assertNotEquals(my_cpu, his_cpu)
            self.policy._clear()
        finally:
            tf.close()


if __name__ == '__main__':
    unittest2.main()