Commit Graph

3 Commits (e98cdbb8b673a6f3307c903c14cd8b8e72badc60)

Avery Pennarun 5041800ac6 wgengine/tstun/faketun: it's a null tunnel, not a loopback.
At some point faketun got implemented as a loopback (put a packet in
from wireguard, and the same packet goes back to wireguard), which is
not useful. It's supposed to be an interface that just sinks all
packets, and then wgengine adds *only* an ICMP Echo responder as a
layer on top.
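
A null device in that spirit might look roughly like the sketch below.
The Device interface here is a simplified, hypothetical stand-in for
wireguard-go's tun.Device, not its actual signature:

    package faketun // hypothetical package, for illustration only

    // Device is a simplified stand-in for wireguard-go's tun.Device
    // interface; the real interface has more methods.
    type Device interface {
            Read(pkt []byte) (int, error)
            Write(pkt []byte) (int, error)
            Close() error
    }

    // nullTUN sinks every packet wireguard writes and never produces
    // any inbound traffic: a null device, not a loopback.
    type nullTUN struct{}

    // Write discards the packet but reports success.
    func (nullTUN) Write(pkt []byte) (int, error) { return len(pkt), nil }

    // Read blocks forever; a sink has nothing to hand back. A loopback
    // would instead return the last packet written, which is the
    // behavior this commit removes.
    func (nullTUN) Read(pkt []byte) (int, error) {
            select {} // no cases: block until the process exits
    }

    func (nullTUN) Close() error { return nil }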

This caused extremely odd bugs on darwin, where the special case that
reinjects packets from local->local was filling the loopback channel
and creating an infinite loop (which became jammed since the reader and
writer were in the same goroutine).
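
That jam is the classic single-goroutine bounded-channel deadlock: the
receive that would drain the channel never gets a turn, so once the
buffer fills, the send blocks forever. A hypothetical minimal
reproduction:

    package main

    import "fmt"

    func main() {
            // Model the loopback as a small buffered channel: packets
            // written by wireguard were meant to be read back out later.
            loop := make(chan []byte, 2)

            // Reader and writer share ONE goroutine: each iteration
            // sends a packet, and nothing ever receives. Once the
            // buffer is full, the send blocks forever and Go's runtime
            // aborts with "all goroutines are asleep - deadlock!".
            for i := 0; ; i++ {
                    loop <- []byte{byte(i)} // blocks once the channel is full
                    fmt.Println("queued packet", i)
            }
    }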

Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
4 years ago
Dmytro Shynkevych 02231e968e wgengine/tstun: add tests and benchmarks (#436)
Signed-off-by: Dmytro Shynkevych <dmytro@tailscale.com>
5 years ago
Dmytro Shynkevych 33b2f30cea wgengine: wrap tun.Device to support filtering and packet injection (#358)
Right now, filtering and packet injection in wgengine depend
on a patch to wireguard-go that probably isn't suitable for upstreaming.

This need not be the case: wireguard-go/tun.Device is an interface.
For example, faketun.go implements it to mock a TUN device for testing.

This patch implements the same interface to provide filtering
and packet injection at the tunnel device level,
at which point the wireguard-go patch should no longer be necessary.
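
As a rough illustration of the wrapping approach, the sketch below puts
filtering and injection around an inner device. All names here (Device,
Wrapper, InjectInbound, ErrFiltered) are hypothetical, simplified
stand-ins, not the real tstun API:

    package tstun // hypothetical sketch, not the real tstun package

    import "errors"

    // Device is a simplified, hypothetical stand-in for wireguard-go's
    // tun.Device interface.
    type Device interface {
            Read(pkt []byte) (int, error)
            Write(pkt []byte) (int, error)
    }

    // ErrFiltered reports a packet dropped by the filter.
    var ErrFiltered = errors.New("packet dropped by filter")

    // Wrapper implements Device by delegating to an inner Device,
    // filtering outbound packets and letting local code inject
    // inbound ones, so wireguard-go itself needs no patch.
    type Wrapper struct {
            inner    Device
            filter   func(pkt []byte) bool // true = allow the packet
            injected chan []byte           // packets queued toward wireguard
    }

    // Read hands wireguard an injected packet if one is queued,
    // otherwise the next packet from the underlying device.
    func (w *Wrapper) Read(pkt []byte) (int, error) {
            select {
            case p := <-w.injected:
                    return copy(pkt, p), nil
            default:
                    return w.inner.Read(pkt)
            }
    }

    // Write runs the filter before passing the packet down the stack.
    func (w *Wrapper) Write(pkt []byte) (int, error) {
            if w.filter != nil && !w.filter(pkt) {
                    return 0, ErrFiltered // signal the drop to the caller
            }
            return w.inner.Write(pkt)
    }

    // InjectInbound queues a packet for delivery to wireguard as if
    // it had arrived on the device.
    func (w *Wrapper) InjectInbound(pkt []byte) {
            w.injected <- pkt
    }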

This patch has the following performance impact on an i7-7500U @ 2.70GHz,
tested in the following namespace configuration:
┌────────────────┐    ┌─────────────────────────────────┐     ┌────────────────┐
│      $ns1      │    │               $ns0              │     │      $ns2      │
│    client0     │    │      tailcontrol, logcatcher    │     │     client1    │
│  ┌─────┐       │    │  ┌──────┐         ┌──────┐      │     │  ┌─────┐       │
│  │vethc│───────┼────┼──│vethrc│         │vethrs│──────┼─────┼──│veths│       │
│  ├─────┴─────┐ │    │  ├──────┴────┐    ├──────┴────┐ │     │  ├─────┴─────┐ │
│  │10.0.0.2/24│ │    │  │10.0.0.1/24│    │10.0.1.1/24│ │     │  │10.0.1.2/24│ │
│  └───────────┘ │    │  └───────────┘    └───────────┘ │     │  └───────────┘ │
└────────────────┘    └─────────────────────────────────┘     └────────────────┘
Before:
---------------------------------------------------
| TCP send               | UDP send               |
|------------------------|------------------------|
| 557.0 (±8.5) Mbits/sec | 3.03 (±0.02) Gbits/sec |
---------------------------------------------------
After:
---------------------------------------------------
| TCP send               | UDP send               |
|------------------------|------------------------|
| 544.8 (±1.6) Mbits/sec | 3.13 (±0.02) Gbits/sec |
---------------------------------------------------
The impact on receive performance is similar.

Signed-off-by: Dmytro Shynkevych <dmytro@tailscale.com>
5 years ago