From e7035220102238c7e823946f3bbd6174dc87b57e Mon Sep 17 00:00:00 2001
From: David Wilson
Date: Sun, 3 Feb 2019 23:51:54 +0000
Subject: [PATCH] issue #505: docs: add new detail graph for one scenario.

---
 docs/ansible.rst                                    | 25 ++++++++++++-------
 .../pcaps/loop-100-items-local-detail.svg           |  1 +
 tests/bench/linux_record_cpu_net.sh                 | 18 ++++++++++---
 3 files changed, 32 insertions(+), 12 deletions(-)
 create mode 100644 docs/images/ansible/pcaps/loop-100-items-local-detail.svg

diff --git a/docs/ansible.rst b/docs/ansible.rst
index 80b4cb06..e45a9f7a 100644
--- a/docs/ansible.rst
+++ b/docs/ansible.rst
@@ -1181,7 +1181,7 @@
 This demonstrates Mitogen vs. SSH pipelining to the local machine running
 `_, executing a simple command 100 times. Most Ansible controller overhead
 is isolated, characterizing just module executor and connection layer performance.
 
-Mitogen requires **63x less bandwidth, 5.9x less time, and 1.5x less CPU**.
+Mitogen requires **63x less bandwidth and 5.9x less time**.
 
 .. image:: images/ansible/pcaps/loop-100-items-local.svg
@@ -1193,6 +1193,14 @@
 sent only once. Compression also benefits SSH pipelining, but the presence of
 large precompressed per-task payloads may present a more significant CPU
 burden during many-host runs.
 
+.. image:: images/ansible/pcaps/loop-100-items-local-detail.svg
+
+A detailed trace reveals improved interaction with the host machine. In this
+playbook, because no forks were required to start SSH clients from the worker
+process executing the loop, the worker's memory was never marked read-only,
+avoiding a major hidden performance problem: the page fault rate is more than
+halved.
+
 
 File Transfer: UK to France
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1222,12 +1230,11 @@
 target.
 
     Mitogen, 98.54, 43.04, "815 KiB", "447 KiB", 3.79
     SSH Pipelining, "1,483.54", 329.37, "99,539 KiB", "6,870 KiB", 57.01
 
-*Roundtrips* represents the approximate number of network roundtrips required
-to describe the runtime that was consumed. Due to Mitogen's built-in file
-transfer support, continuous reinitialization of an external `scp`/`sftp`
-client is avoided, permitting large ``with_filetree`` copies to become
-practical without any special casing within the playbook or the Ansible
-implementation.
+*Roundtrips* is the approximate number of network roundtrips required to
+describe the runtime that was consumed. Due to Mitogen's built-in file transfer
+support, continuous reinitialization of an external `scp`/`sftp` client is
+avoided, permitting large ``with_filetree`` copies to become practical without
+any special casing within the playbook or the Ansible implementation.
 
 
 DebOps: UK to India
 ~~~~~~~~~~~~~~~~~~~
@@ -1245,7 +1252,7 @@
 Mitogen's module loading and in-memory caching. By running over a long-distance
 connection, it highlights behaviour of the connection layer in the presence of
 high latency.
 
-Mitogen requires **14.5x less bandwidth, 4x less time, and 2.3x less CPU**.
+Mitogen requires **14.5x less bandwidth and 4x less time**.
 
 .. image:: images/ansible/pcaps/debops-uk-india.svg
@@ -1258,6 +1265,6 @@
 as previously, with many steps running unavoidably expensive tasks like
 building C++ code, and compiling static web site assets.
 
 Despite the small margin for optimization, Mitogen still manages **6.2x less
-bandwidth, 1.8x less time, and 2x less CPU**.
+bandwidth and 1.8x less time**.
 
 .. image:: images/ansible/pcaps/costapp-uk-india.svg
diff --git a/docs/images/ansible/pcaps/loop-100-items-local-detail.svg b/docs/images/ansible/pcaps/loop-100-items-local-detail.svg
new file mode 100644
index 00000000..7ae2cf3c
--- /dev/null
+++ b/docs/images/ansible/pcaps/loop-100-items-local-detail.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/tests/bench/linux_record_cpu_net.sh b/tests/bench/linux_record_cpu_net.sh
index bc5c44ee..d125e467 100755
--- a/tests/bench/linux_record_cpu_net.sh
+++ b/tests/bench/linux_record_cpu_net.sh
@@ -6,7 +6,19 @@
 #
 
 [ ! "$1" ] && exit 1
-sudo tcpdump -w $1-out.cap -s 0 host k1.botanicus.net &
-date +%s.%N > $1-task-clock.csv
-perf stat -x, -I 25 -e task-clock --append -o $1-task-clock.csv ansible-playbook run_hostname_100_times.yml
+name="$1"; shift
+
+
+sudo tcpdump -i any -w "$name-net.pcap" -s 66 port 22 or port 9122 &
+sleep 0.5
+
+perf stat -x, -I 100 \
+    -e branches \
+    -e instructions \
+    -e task-clock \
+    -e context-switches \
+    -e page-faults \
+    -e cpu-migrations \
+    -o "$name-perf.csv" "$@"
+pkill -f ssh:; sleep 0.1
 sudo pkill -f tcpdump