[costapp]
ERROR! [pid 25135] 21:10:56.284733 E mitogen.ctx.ssh.35.200.203.48: mitogen: While calling no-reply method PushFileService.forward
Traceback (most recent call last):
  File "master:/home/dmw/src/mitogen/mitogen/service.py", line 260, in _invoke
    ret = method(**kwargs)
  File "master:/home/dmw/src/mitogen/mitogen/service.py", line 718, in forward
    self._forward(path, context)
  File "master:/home/dmw/src/mitogen/mitogen/service.py", line 633, in _forward
    stream = self.router.stream_by_id(context.context_id)
AttributeError: 'unicode' object has no attribute 'context_id'
^C [ERROR]: User interrupted execution
Minify-safe files are marked with a magical "# !mitogen: minify_safe"
comment anywhere in the file, which activates the minifier. The result
is naturally cached by ModuleResponder, so the lru_cache is gone too.
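For illustration only, a minimal sketch of how such an opt-in marker could be detected; MARKER and is_minify_safe are hypothetical names, not Mitogen's API:

MARKER = '# !mitogen: minify_safe'

def is_minify_safe(source):
    # The comment may appear anywhere in the file, so a plain substring
    # check over the module source is enough.
    return MARKER in source

example = "import os\n# !mitogen: minify_safe\ndef f():\n    return os.getpid()\n"
print(is_minify_safe(example))  # True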
Given:

import os, mitogen

@mitogen.main()
def main(router):
    c = router.ssh(hostname='k3')
    c.call(os.getpid)
    router.sudo(via=c)
SSH footprint drops from 56.2 KiB to 42.75 KiB (-23.9%)
Ansible "shell: hostname" drops from 149.26 KiB to 117.42 KiB (-21.3%)
When the interpreter is modern enough, use zlib.compressobj() to
pre-compress the unchanging parts of the bootstrap once, then use
compressobj.copy() to append just the context's config during stream
construction.
Before: 100 loops, best of 3: 5.81 msec per loop
After: 10000 loops, best of 3: 35.9 usec per loop
With 100 targets this is enough to knock 6 seconds off startup; at 500
targets it becomes half a minute.
Test 'program':

python -m timeit -s '
import mitogen.parent as p;
import mitogen.master as m;
r = m.Router();
s = p.Stream(r, 0, max_message_size=1);
r.broker.shutdown()' \
    's.get_preamble()'
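For illustration, a minimal sketch of the pre-compression pattern described above; BOOTSTRAP and get_preamble below are stand-ins, not Mitogen's real bootstrap or API:

import zlib

BOOTSTRAP = b'def bootstrap():\n    pass\n' * 200  # stand-in for the unchanging source

# Compress the static part exactly once, at import time.
_base = zlib.compressobj(9)
_static = _base.compress(BOOTSTRAP)

def get_preamble(config):
    # copy() clones the compressor's internal state, so each stream only
    # pays for compressing its own small config blob.
    c = _base.copy()
    return _static + c.compress(config) + c.flush()

print(len(get_preamble(b'{"context_id": 1}')))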
Ansible modules were being resent continuously; only the main script module
and any custom module_utils present were affected.
Wire footprint drops by ~1/3 for a 500-task run of 'shell: hostname':
-rw-r--r-- 1 root root 584K Jan 31 22:06 500mito-before2
-rw-r--r-- 1 root root 434K Jan 31 22:04 500mito-filesbugonly
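As a sketch of the deduplication idea only (not the actual Mitogen change), remembering which (target, path) pairs have already been pushed avoids the resend:

_sent = set()

def push_once(target_id, path, push):
    # push() performs the real transfer; it runs at most once per pair.
    key = (target_id, path)
    if key in _sent:
        return False
    _sent.add(key)
    push(target_id, path)
    return True

sent_log = []
push = lambda target, path: sent_log.append((target, path))
push_once(1, 'ansible/modules/command.py', push)
push_once(1, 'ansible/modules/command.py', push)  # skipped: already sent
print(sent_log)  # one entry, not two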
Single task 100 SSH target run, before:
3533181 function calls (3533083 primitive calls) in 616.688 seconds
User time (seconds): 32.52
System time (seconds): 2.71
Percent of CPU this job got: 64%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:54.88
After:
451602 function calls (451504 primitive calls) in 570.746 seconds
User time (seconds): 29.48
System time (seconds): 2.81
Percent of CPU this job got: 67%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:48.20
This has been broken for some time, but it has somehow only become
noticeable with recent Ansible releases.
loop-100-tasks.yml before:
15.532724001 seconds time elapsed
8.453850000 seconds user
5.808627000 seconds sys
loop-100-tasks.yml after:
8.991635735 seconds time elapsed
5.059232000 seconds user
2.578842000 seconds sys