Compare commits

...

38 Commits

Author SHA1 Message Date
Flakes Updater 85e7d7b62c go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
3 weeks ago
Andrew Lytvynov ce5c80d0fe
clientupdate: exec systemctl instead of using dbus to restart (#11923)
Shell out to "systemctl", which lets us drop an extra dependency.

Updates https://github.com/tailscale/corp/issues/18935

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
Fran Bull 6a0fbacc28 appc: setting AdvertiseRoutes explicitly discards app connector routes
This fixes bugs where, after using the CLI to set AdvertiseRoutes, users
found that they had to restart tailscaled before the app connector
would advertise previously learned routes again. It also seems more in
line with user expectations.

Fixes #11006
Signed-off-by: Fran Bull <fran@tailscale.com>
3 weeks ago
Fran Bull c27dc1ca31 appc: unadvertise routes when reconfiguring app connector
If the controlknob to persist app connector routes is enabled, then when
reconfiguring an app connector, unadvertise routes that are no longer
relevant.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
3 weeks ago
Fran Bull fea2e73bc1 appc: write discovered domains to StateStore
If the controlknob is on.
This will allow us to remove discovered routes associated with a
particular domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
3 weeks ago
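An illustrative sketch of what persisting the discovered-route state through an ipn.StateStore could look like; the state key name and package here are assumptions, not what the real implementation uses:

```go
package appcstate // illustrative package name

import (
	"encoding/json"

	"tailscale.com/appc"
	"tailscale.com/ipn"
)

// routeInfoKey is a hypothetical state key; the real code picks its own.
const routeInfoKey = ipn.StateKey("_appc-route-info")

func storeRouteInfo(st ipn.StateStore, ri *appc.RouteInfo) error {
	bs, err := json.Marshal(ri)
	if err != nil {
		return err
	}
	return st.WriteState(routeInfoKey, bs)
}

func loadRouteInfo(st ipn.StateStore) (*appc.RouteInfo, error) {
	bs, err := st.ReadState(routeInfoKey)
	if err != nil {
		return nil, err
	}
	ri := new(appc.RouteInfo)
	if err := json.Unmarshal(bs, ri); err != nil {
		return nil, err
	}
	return ri, nil
}
```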
Fran Bull 1bd1b387b2 appc: add flag shouldStoreRoutes and controlknob for it
When an app connector is reconfigured and domains to route are removed,
we would like to no longer advertise routes that were discovered for
those domains. In order to do this we plan to store which routes were
discovered for which domains.

Add a controlknob so that we can enable/disable the new behavior.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
3 weeks ago
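A hedged sketch of how the knob might gate the behavior at construction time, reusing the helpers from the sketch above; per the appc diff further down, a nil store function means "do not persist" and ShouldStoreRoutes just reports whether the function is non-nil. The knob plumbing here is assumed:

```go
import (
	"tailscale.com/appc"
	"tailscale.com/ipn"
	"tailscale.com/types/logger"
)

// newConnector wires up an AppConnector, enabling persistence only when
// the controlknob is on. How knobOn arrives from control is assumed.
func newConnector(logf logger.Logf, adv appc.RouteAdvertiser, st ipn.StateStore, knobOn bool) *appc.AppConnector {
	if !knobOn {
		return appc.NewAppConnector(logf, adv, nil, nil) // old, non-persisting behavior
	}
	ri, err := loadRouteInfo(st)
	if err != nil {
		ri = &appc.RouteInfo{} // start fresh if no stored state
	}
	return appc.NewAppConnector(logf, adv, ri, func(ri *appc.RouteInfo) error {
		return storeRouteInfo(st, ri)
	})
}
```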
Fran Bull 79836e7bfd appc: add RouteInfo struct and persist it to StateStore
Lays the groundwork for the ability to persist app connectors discovered
routes, which will allow us to stop advertising routes for a domain if
the app connector no longer monitors that domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
3 weeks ago
Andrew Dunham b2b49cb3d5 wgengine/wgcfg/nmcfg: skip expired peers
Updates tailscale/corp#19315

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1ad0c8796efe3dd456280e51efaf81f6d2049772
3 weeks ago
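For illustration, the peer-skipping idea under the assumption that expiry is visible on tailcfg.Node (it carries an Expired flag and a KeyExpiry timestamp); this is a sketch, not the actual nmcfg code:

```go
import (
	"time"

	"tailscale.com/tailcfg"
)

// livePeers filters out peers whose node key has expired; there is no
// point emitting WireGuard config for a peer that can no longer be used.
func livePeers(peers []*tailcfg.Node, now time.Time) []*tailcfg.Node {
	out := make([]*tailcfg.Node, 0, len(peers))
	for _, p := range peers {
		if p.Expired || (!p.KeyExpiry.IsZero() && now.After(p.KeyExpiry)) {
			continue
		}
		out = append(out, p)
	}
	return out
}
```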
Mario Minardi 74c399483c
api.md: explicitly set content-type headers in POST CURL examples (#11916)
Explicitly set `-H "Content-Type: application/json"` in CURL examples
for POST endpoints as the default content type used by CURL is otherwise
`application/x-www-form-urlencoded` and these endpoints expect JSON data.

Updates https://github.com/tailscale/tailscale/issues/11914

Signed-off-by: Mario Minardi <mario@tailscale.com>
3 weeks ago
Irbe Krumina 1452faf510
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login on kube, check Secret create perms, allow empty state Secret (#11326)
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login and empty state Secrets, check perms

* Allow users to pre-create empty state Secrets

* Add a fake internal kube client, test functionality that has dependencies on kube client operations.

* Fix an issue where interactive login was not allowed in an edge case where state Secret does not exist

* Make the CheckSecretPermissions method report whether we have permissions to create/patch a Secret if it's determined that these operations will be needed

Updates tailscale/tailscale#11170

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
3 weeks ago
Kristoffer Dalby 1e6cdb7d86 api.md: fix missing links after move of device posture
Updates tailscale/corp#18572

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
3 weeks ago
Brad Fitzpatrick b9adbe2002 net/{interfaces,netmon}, all: merge net/interfaces package into net/netmon
In prep for most of the package funcs in net/interfaces to become
methods in a long-lived netmon.Monitor that can cache things.  (Many
of the funcs are very heavy to call regularly, whereas the long-lived
netmon.Monitor can subscribe to things from the OS and remember
answers to questions it's asked regularly later)

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: Ie4e8dedb70136af2d611b990b865a822cd1797e5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 6b95219e3a net/netmon, all: add netmon.State type alias of interfaces.State
... in prep for merging the net/interfaces package into net/netmon.

This is a no-op change that updates a bunch of the API signatures ahead of
a future change to actually move things (and remove the type alias)

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I477613388f09389214db0d77ccf24a65bff2199c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
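The no-op nature follows from this being a Go type alias rather than a new named type; roughly:

```go
package netmon

import "tailscale.com/net/interfaces"

// State is an alias, not a distinct type, so a value of
// interfaces.State satisfies any API rewritten in terms of
// netmon.State (and vice versa) with zero behavior change.
type State = interfaces.State
```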
Irbe Krumina 45f0721530
cmd/containerboot: wait on tailscaled process only (#11897)
Modifies containerboot to wait on the tailscaled process
only, not on any child process of containerboot.
Waiting on any subprocess raced with Go's
exec.Cmd.Run, which is used to run iptables commands
and starts and waits on its own subprocesses.

Containerboot itself does not run anything else
except for tailscaled, so there shouldn't be a need
to wait on anything else.

Updates tailscale/tailscale#11593

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
3 weeks ago
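The race, sketched: reaping any child (wait(-1)-style) can steal exit statuses from subprocesses that Go's exec.Cmd.Run is concurrently waiting on, while cmd.Wait blocks only on the one process we started. An assumed shape, not the actual containerboot code:

```go
import (
	"log"
	"os/exec"
)

func runTailscaled() {
	cmd := exec.Command("tailscaled", "--tun=userspace-networking")
	if err := cmd.Start(); err != nil {
		log.Fatalf("starting tailscaled: %v", err)
	}
	// Elsewhere, exec.Cmd.Run is used for iptables commands; it spawns
	// and reaps its own children. Waiting only on our own process
	// avoids racing it for those exit statuses.
	if err := cmd.Wait(); err != nil {
		log.Fatalf("tailscaled exited: %v", err)
	}
}
```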
Brad Fitzpatrick 3672f29a4e net/netns, net/dns/resolver, etc: make netmon required in most places
The goal is to move more network state accessors to netmon.Monitor
where they can be cheaper/cached. But first (this change and others)
we need to make sure the one netmon.Monitor is plumbed everywhere.

Some notable bits:

* tsdial.NewDialer is added, taking a now-required netmon

* because a tsdial.Dialer always has a netmon, anything taking both
  a Dialer and a NetMon is now redundant; take only the Dialer and
  get the NetMon from that if/when needed.

* netmon.NewStatic is added, primarily for tests

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I877f9cb87618c4eb037cee098241d18da9c01691
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
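A sketch of the resulting plumbing, assuming the shapes the message describes (netmon.NewStatic for tests, tsdial.NewDialer taking the now-required monitor); exact signatures may differ:

```go
import (
	"tailscale.com/net/netmon"
	"tailscale.com/net/tsdial"
)

func newTestDialer() *tsdial.Dialer {
	mon := netmon.NewStatic() // static snapshot, primarily for tests
	d := tsdial.NewDialer(mon)
	// Anything holding the Dialer can reach the Monitor through it, so
	// APIs that used to take both a Dialer and a Monitor now take only
	// the Dialer.
	return d
}
```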
Brad Fitzpatrick 4f73a26ea5 ipn/ipnlocal: skip TestOnTailnetDefaultAutoUpdate on macOS for now
While it's broken.

Updates #11894

Change-Id: I24698707ffe405471a14ab2683aea7e836531da8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
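The conventional Go shape for such a temporary skip, as an illustration (the guard placement and wording here are assumed):

```go
import (
	"runtime"
	"testing"
)

func TestOnTailnetDefaultAutoUpdate(t *testing.T) {
	if runtime.GOOS == "darwin" {
		t.Skip("temporarily skipped on macOS while broken; see #11894")
	}
	// ... rest of the test ...
}
```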
Brad Fitzpatrick 7a62dddeac net/netcheck, wgengine/magicsock: make netmon.Monitor required
This has been a TODO for ages. Time to do it.

The goal is to move more network state accessors to netmon.Monitor
where they can be cheaper/cached.

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I60fc6508cd2d8d079260bda371fc08b6318bcaf1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 4dece0c359 net/netutil: remove a use of deprecated interfaces.GetState
I'm working on moving all network state queries to be on
netmon.Monitor, removing old APIs.

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: If0de137e0e2e145520f69e258597fb89cf39a2a3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 7f587d0321 health, wgengine/magicsock: remove last of health package globals
Fixes #11874
Updates #4136

Change-Id: Ib70e6831d4c19c32509fe3d7eee4aa0e9f233564
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Jonathan Nobels 71e9258ad9
ipn/ipnlocal: fix null dereference for early suggested exit node queries (#11885)
Fixes tailscale/corp#19558

A request for the suggested exit nodes that occurs too early in the
VPN lifecycle would result in a null deref of the netmap and/or
the netcheck report.  This checks both and errors out.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
3 weeks ago
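The defensive shape described, sketched with assumed field and helper names (the real method lives in ipn/ipnlocal and differs):

```go
import (
	"errors"

	"tailscale.com/tailcfg"
)

func (b *LocalBackend) suggestExitNode() (tailcfg.StableNodeID, error) {
	// Too early in the VPN lifecycle, neither a netmap nor a netcheck
	// report may exist yet; error out instead of dereferencing nil.
	if b.netMap == nil {
		return "", errors.New("no netmap: suggested exit node unavailable")
	}
	if b.lastNetcheckReport == nil {
		return "", errors.New("no netcheck report: suggested exit node unavailable")
	}
	return pickExitNode(b.netMap, b.lastNetcheckReport) // hypothetical helper
}
```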
Brad Fitzpatrick 745931415c health, all: remove health.Global, finish plumbing health.Tracker
Updates #11874
Updates #4136

Change-Id: I414470f71d90be9889d44c3afd53956d9f26cd61
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Brad Fitzpatrick a4a282cd49 control/controlclient: plumb health.Tracker
Updates #11874
Updates #4136

Change-Id: Ia941153bd83523f0c8b56852010f5231d774d91a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Brad Fitzpatrick 6d69fc137f ipn/{ipnlocal,localapi},wgengine{,/magicsock}: plumb health.Tracker
Down to 25 health.Global users. After this, controlclient,
net/dns, and wgengine/router remain.

Updates #11874
Updates #4136

Change-Id: I6dd1856e3d9bf523bdd44b60fb3b8f7501d5dc0d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Irbe Krumina df8f40905b
cmd/k8s-operator,k8s-operator: optionally serve tailscaled metrics on Pod IP (#11699)
Adds a new .spec.metrics field to ProxyClass to allow users to optionally serve
client metrics (tailscaled --debug) on <Pod-IP>:9001.
Metrics cannot currently be enabled for proxies that egress traffic to the tailnet,
nor for Ingress proxies with the tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation
(because they currently forward all cluster traffic to their respective backends).

The assumption is that users will want to have these metrics enabled
continuously to be able to monitor proxy behaviour (as opposed to enabling
them temporarily for debugging). Hence we expose them on the Pod IP to make
it easier to consume them, e.g. via a Prometheus PodMonitor.

Updates tailscale/tailscale#11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
4 weeks ago
Brad Fitzpatrick 723c775dbb tsd, ipnlocal, etc: add tsd.System.HealthTracker, start some plumbing
This adds a health.Tracker to tsd.System, accessible via
a new tsd.System.HealthTracker method.

In the future, that new method will return a tsd.System-specific
HealthTracker, so multiple tsnet.Servers in the same process are
isolated. For now, though, it just always returns the temporary
health.Global value. That permits incremental plumbing over a number
of changes. When the second to last health.Global reference is gone,
then the tsd.System.HealthTracker implementation can return a private
Tracker.

The primary plumbing this does is adding it to LocalBackend and its
dozen and change health calls. A few misc other callers are also
plumbed. Subsequent changes will flesh out other parts of the tree
(magicsock, controlclient, etc).

Updates #11874
Updates #4136

Change-Id: Id51e73cfc8a39110425b6dc19d18b3975eac75ce
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
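Per the message, the accessor can land now and switch to a per-System Tracker later without touching callers again; roughly:

```go
// HealthTracker returns the health.Tracker for this System.
// For now it returns the temporary process-global health.Global; once
// the remaining global references are gone, it can return a Tracker
// private to this System, isolating multiple tsnet.Servers in one process.
func (s *System) HealthTracker() *health.Tracker {
	return health.Global
}
```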
Brad Fitzpatrick cb66952a0d health: permit Tracker method calls on nil receiver
In prep for tsd.System Tracker plumbing throughout tailscaled,
defensively permit all methods on Tracker to accept a nil receiver
without crashing, lest I screw something up later. (A health tracking
system that itself causes crashes would be no good.) Methods on nil
receivers should not be called, so a future change will also collect
their stacks (and panic during dev/test), but we should at least not
crash in prod.

This also locks that in with a test using reflect to automatically
call all methods on a nil receiver and check they don't crash.

Updates #11874
Updates #4136

Change-Id: I8e955046ebf370ec8af0c1fb63e5123e6282a9d3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
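The guard pattern described, sketched on an assumed method (the real Tracker API differs; the warnableVal field is sketched under the Warnable-split commit below):

```go
// SetUnhealthy records a problem for w (method name illustrative).
// It is safe to call on a nil Tracker: a health-tracking system that
// itself crashes the daemon would be no good.
func (t *Tracker) SetUnhealthy(w *Warnable, msg string) {
	if t == nil {
		// Per the commit, a future change also collects stacks here
		// (and panics in dev/test): a nil receiver means a plumbing bug.
		return
	}
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.warnableVal == nil {
		t.warnableVal = make(map[*Warnable]string)
	}
	t.warnableVal[w] = msg
}
```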
Chris Palmer 7349b274bd
safeweb: handle mux pattern collisions more generally (#11801)
Fixes #11800

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
4 weeks ago
Brad Fitzpatrick 5b32264033 health: break Warnable into a global and per-Tracker value halves
Previously it was both metadata about the class of warnable item as
well as the value.

Now it's only metadata and the value is per-Tracker.

Updates #11874
Updates #4136

Change-Id: Ia1ed1b6c95d34bc5aae36cffdb04279e6ba77015
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
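The shape of the split, sketched with assumed field names: Warnable keeps only class-level metadata, and each Tracker holds the current value keyed by Warnable:

```go
// Warnable is now only metadata describing a class of warnable item.
type Warnable struct {
	Code string // stable identifier (field name illustrative)
}

// Tracker owns the per-instance values that previously lived on Warnable.
type Tracker struct {
	mu          sync.Mutex
	warnableVal map[*Warnable]string // current value for each Warnable
}
```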
Brad Fitzpatrick ebc552d2e0 health: add Tracker type, in prep for removing global variables
This moves most of the health package global variables to a new
`health.Tracker` type.

But then rather than plumbing the Tracker in tsd.System everywhere,
this only goes halfway and makes one new global Tracker
(`health.Global`) that all the existing callers now use.

A future change will eliminate that global.

Updates #11874
Updates #4136

Change-Id: I6ee27e0b2e35f68cb38fecdb3b2dc4c3f2e09d68
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Claire Wang d5fc52a0f5
tailcfg: add auto exit node attribute (#11871)
Updates tailscale/corp#19515

Signed-off-by: Claire Wang <claire@tailscale.com>
4 weeks ago
Sonia Appasamy 18765cd4f9 release/dist/qnap: omit .qpkg.codesigning files
Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
4 weeks ago
Percy Wegmann 955ad12489 ipn/ipnlocal: only show Taildrive peers to which ACLs grant us access
This improves convenience and security.

* Convenience - no need to see nodes that can't share anything with you.
* Security - malicious nodes can't expose shares to peers that aren't
             allowed to access their shares.

Updates tailscale/corp#19432

Signed-off-by: Percy Wegmann <percy@tailscale.com>
4 weeks ago
Sonia Appasamy 5d4b4ffc3c release/dist/qnap: update perms for tmpDir files
Allows all users to read all files, and .sh/.cgi files to be
executable.

Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
4 weeks ago
Lee Briggs 14ac41febc
cmd/k8s-operator,k8s-operator: proxyclass affinity (#11862)
add ability to set affinity rules via ProxyClass

Updates #11861

Signed-off-by: Lee Briggs <lee@leebriggs.co.uk>
4 weeks ago
Anton Tolchanov 31e6bdbc82 ipn/ipnlocal: always stop the engine on auth when key has expired
If seamless key renewal is enabled, we typically do not stop the engine
(deconfigure networking). However, if the node key has expired there is
no point in keeping the connection up, and it might actually prevent
key renewal if auth relies on endpoints routed via app connectors.

Fixes tailscale/corp#5800

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
4 weeks ago
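The decision reduces to: keep the engine up only when seamless renewal is on and the key is still valid. A sketch with assumed names:

```go
// shouldStopEngineOnAuth reports whether networking should be
// deconfigured when (re)authenticating. Names are illustrative.
func shouldStopEngineOnAuth(seamlessKeyRenewal, keyExpired bool) bool {
	if keyExpired {
		// An expired key can't move traffic anyway, and a stale tunnel
		// could even block renewal if auth endpoints are routed via an
		// app connector.
		return true
	}
	return !seamlessKeyRenewal
}
```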
Andrea Gottardo 1d3e77f373
util/syspolicy: add ReadStringArray interface (#11857)
Fixes tailscale/corp#19459

This PR adds the ability for users of the syspolicy handler to read string arrays from the MDM solution configured on the system.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
4 weeks ago
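The handler surface this implies, sketched with assumed sibling methods (only ReadStringArray is confirmed by the message):

```go
// Handler reads policy settings from the platform MDM mechanism.
type Handler interface {
	ReadString(key string) (string, error)
	ReadBoolean(key string) (bool, error)
	// ReadStringArray is the addition: it returns the string-array
	// value configured for key by the system's MDM solution.
	ReadStringArray(key string) ([]string, error)
}
```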
Sonia Appasamy 0cce456ee5 release/dist/qnap: use tmp file directory for qpkg building
This change allows the release/dist/qnap package to be used
outside of the tailscale repo (notably, it will be used from corp)
by using an embedded file system for build files, which gets
temporarily written to a new folder during qnap build runs.

Without this change, when used from corp, the release/dist/qnap
folder will fail to be found within the corp repo, causing
various steps of the build to fail.

The file renames in this change are to combine the build files
into a /files folder, separated into /scripts and /Tailscale.

Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
4 weeks ago
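A sketch of the embed-then-materialize approach described; the package name and paths are illustrative:

```go
package qnap

import (
	"embed"
	"io/fs"
	"os"
	"path/filepath"
)

//go:embed files
var buildFiles embed.FS

// writeBuildFiles copies the embedded build files into a fresh temp
// directory so the qnap build can run from any repo, and returns its path.
func writeBuildFiles() (string, error) {
	dir, err := os.MkdirTemp("", "qnap-build")
	if err != nil {
		return "", err
	}
	err = fs.WalkDir(buildFiles, "files", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		dst := filepath.Join(dir, path)
		if d.IsDir() {
			return os.MkdirAll(dst, 0o755)
		}
		data, err := buildFiles.ReadFile(path)
		if err != nil {
			return err
		}
		return os.WriteFile(dst, data, 0o644)
	})
	return dir, err
}
```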
Percy Wegmann c8e912896e wgengine/router: consolidate routes before reconfiguring router for mobile clients
This helps reduce memory pressure on tailnets with large numbers
of routes.

Updates tailscale/corp#19332

Signed-off-by: Percy Wegmann <percy@tailscale.com>
4 weeks ago
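One plausible reading of "consolidate", sketched: drop any prefix wholly covered by a broader one before handing the set to the router, so large route lists shrink. Not the actual wgengine/router code:

```go
import (
	"net/netip"
	"slices"
)

// consolidateRoutes returns routes with any prefix that is entirely
// contained in another, broader prefix removed. Quadratic, but fine
// as an illustration.
func consolidateRoutes(routes []netip.Prefix) []netip.Prefix {
	sorted := make([]netip.Prefix, 0, len(routes))
	for _, p := range routes {
		sorted = append(sorted, p.Masked())
	}
	// Sort so that broader prefixes come before the prefixes they contain.
	slices.SortFunc(sorted, func(a, b netip.Prefix) int {
		if c := a.Addr().Compare(b.Addr()); c != 0 {
			return c
		}
		return a.Bits() - b.Bits()
	})
	var out []netip.Prefix
	for _, p := range sorted {
		covered := false
		for _, q := range out {
			if q.Bits() <= p.Bits() && q.Contains(p.Addr()) {
				covered = true // q already covers all of p
				break
			}
		}
		if !covered {
			out = append(out, p)
		}
	}
	return out
}
```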

@ -356,7 +356,7 @@ jobs:
# some Android breakages early.
# TODO(bradfitz): better; see https://github.com/tailscale/tailscale/issues/4482
- name: build some
run: ./tool/go install ./net/netns ./ipn/ipnlocal ./wgengine/magicsock/ ./wgengine/ ./wgengine/router/ ./wgengine/netstack ./util/dnsname/ ./ipn/ ./net/interfaces ./wgengine/router/ ./tailcfg/ ./types/logger/ ./net/dns ./hostinfo ./version
run: ./tool/go install ./net/netns ./ipn/ipnlocal ./wgengine/magicsock/ ./wgengine/ ./wgengine/router/ ./wgengine/netstack ./util/dnsname/ ./ipn/ ./net/netmon ./wgengine/router/ ./tailcfg/ ./types/logger/ ./net/dns ./hostinfo ./version
env:
GOOS: android
GOARCH: arm64

@ -428,7 +428,8 @@ The ID of the device.
```sh
curl -X POST 'https://api.tailscale.com/api/v2/device/12345/expire' \
-u "tskey-api-xxxxx:"
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json"
```
### Response
@ -508,6 +509,7 @@ The new list of enabled subnet routes.
```sh
curl "https://api.tailscale.com/api/v2/device/11055/routes" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"routes": ["10.0.0.0/16", "192.168.1.0/24"]}'
```
@ -556,6 +558,7 @@ Specify whether the device is authorized. False to deauthorize an authorized dev
```sh
curl "https://api.tailscale.com/api/v2/device/11055/authorized" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"authorized": true}'
```
@ -605,6 +608,7 @@ The new list of tags for the device.
```sh
curl "https://api.tailscale.com/api/v2/device/11055/tags" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"tags": ["tag:foo", "tag:bar"]}'
```
@ -667,6 +671,7 @@ This returns a 2xx code on success, with an empty JSON object in the response bo
```sh
curl "https://api.tailscale.com/api/v2/device/11055/key" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"keyExpiryDisabled": true}'
```
@ -712,6 +717,7 @@ This returns a 2xx code on success, with an empty JSON object in the response bo
```sh
curl "https://api.tailscale.com/api/v2/device/11055/ip" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"ipv4": "100.80.0.1"}'
```
@ -722,8 +728,8 @@ The response is 2xx on success. The response body is currently an empty JSON obj
## Get device posture attributes
The posture attributes API endpoints can be called with OAuth access tokens with
an `acl` or `devices` [scope][oauth-scopes], or personal access tokens belonging to
[user roles][user-roles] Owners, Admins, Network Admins, or IT Admins.
an `acl` or `devices` [scope](https://tailscale.com/kb/1215/oauth-clients#scopes), or personal access tokens belonging to
[user roles](https://tailscale.com/kb/1138/user-roles) Owners, Admins, Network Admins, or IT Admins.
```
GET /api/v2/device/{deviceID}/attributes
@ -808,6 +814,7 @@ A number value is an integer and must be a JSON safe number (up to 2^53 - 1).
```
curl "https://api.tailscale.com/api/v2/device/11055/attributes/custom:my_attribute" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"value": "my_value"}'
```
@ -918,7 +925,7 @@ The response will contain a JSON object with the fields:
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl" \
-u "tskey-api-xxxxx:" \
-u "tskey-api-xxxxx:"
```
### Response in HuJSON format
@ -958,8 +965,8 @@ Etag: "e0b2816b418b3f266309d94426ac7668ab3c1fa87798785bf82f1085cc2f6d9c"
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl" \
-u "tskey-api-xxxxx:" \
-H "Accept: application/json" \
-u "tskey-api-xxxxx:"
-H "Accept: application/json"
```
### Response in JSON format
@ -1000,7 +1007,7 @@ Etag: "e0b2816b418b3f266309d94426ac7668ab3c1fa87798785bf82f1085cc2f6d9c"
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl?details=1" \
-u "tskey-api-xxxxx:" \
-u "tskey-api-xxxxx:"
```
### Response (with details)
@ -1072,6 +1079,7 @@ Learn about the [ACL policy properties you can include in the request](https://t
POST /api/v2/tailnet/example.com/acl
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
-H "If-Match: \"e0b2816b418b3f266309d94426ac7668ab3c1fa87798785bf82f1085cc2f6d9c\""
--data-binary '// Example/default ACLs for unrestricted connections.
{
@ -1184,6 +1192,7 @@ Learn about [tailnet policy file entries](https://tailscale.com/kb/1018).
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl/preview?previewFor=user1@example.com&type=user" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '// Example/default ACLs for unrestricted connections.
{
// Declare tests to check functionality of ACL rules. User must be a valid user with registered machines.
@ -1267,6 +1276,7 @@ Learn more about [tailnet policy file tests](https://tailscale.com/kb/1018/#test
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl/validate" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '
[
{"src": "user1@example.com", "accept": ["example-host-1:22"], "deny": ["example-host-2:100"]}
@ -1288,6 +1298,7 @@ The `POST` body should be a JSON object with a JSON or HuJSON representation of
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/acl/validate" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '
{
"acls": [
@ -1548,6 +1559,7 @@ Note the following about required vs. optional values:
```jsonc
curl "https://api.tailscale.com/api/v2/tailnet/example.com/keys" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '
{
"capabilities": {
@ -1759,6 +1771,7 @@ Adding DNS nameservers with the MagicDNS on:
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/nameservers" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"dns": ["8.8.8.8"]}'
```
@ -1778,6 +1791,7 @@ The response is a JSON object containing the new list of nameservers and the sta
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/nameservers" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"dns": []}'
```
@ -1864,6 +1878,7 @@ The DNS preferences in JSON. Currently, MagicDNS is the only setting available:
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/preferences" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"magicDNS": true}'
```
@ -1947,6 +1962,7 @@ Specify a list of search paths in a JSON object:
```sh
curl "https://api.tailscale.com/api/v2/tailnet/example.com/dns/searchpaths" \
-u "tskey-api-xxxxx:" \
-H "Content-Type: application/json" \
--data-binary '{"searchPaths": ["user1.example.com", "user2.example.com"]}'
```

@ -23,6 +23,7 @@ import (
"tailscale.com/util/dnsname"
"tailscale.com/util/execqueue"
"tailscale.com/util/mak"
"tailscale.com/util/slicesx"
)
// RouteAdvertiser is an interface that allows the AppConnector to advertise
@ -36,6 +37,19 @@ type RouteAdvertiser interface {
UnadvertiseRoute(...netip.Prefix) error
}
// RouteInfo is a data structure used to persist the in-memory state of an AppConnector
// so that we can know, even after a restart, which routes came from ACLs and which were
// learned from domains.
type RouteInfo struct {
// Control holds the routes from the 'routes' section of an app connector ACL.
Control []netip.Prefix `json:",omitempty"`
// Domains are the routes discovered by observing DNS lookups for configured domains.
Domains map[string][]netip.Addr `json:",omitempty"`
// Wildcards are the configured DNS lookup domains to observe. When a DNS query matches Wildcards,
// its result is added to Domains.
Wildcards []string `json:",omitempty"`
}
// AppConnector is an implementation of an app connector that performs
// its function as a subsystem inside of a tailscale node. On the control plane
// side, App Connector routing is configured in terms of domains rather than IP
@ -49,6 +63,9 @@ type AppConnector struct {
logf logger.Logf
routeAdvertiser RouteAdvertiser
// storeRoutesFunc will be called to persist routes if it is not nil.
storeRoutesFunc func(*RouteInfo) error
// mu guards the fields that follow
mu sync.Mutex
@ -67,11 +84,46 @@ type AppConnector struct {
}
// NewAppConnector creates a new AppConnector.
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser) *AppConnector {
return &AppConnector{
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser, routeInfo *RouteInfo, storeRoutesFunc func(*RouteInfo) error) *AppConnector {
ac := &AppConnector{
logf: logger.WithPrefix(logf, "appc: "),
routeAdvertiser: routeAdvertiser,
storeRoutesFunc: storeRoutesFunc,
}
if routeInfo != nil {
ac.domains = routeInfo.Domains
ac.wildcards = routeInfo.Wildcards
ac.controlRoutes = routeInfo.Control
}
return ac
}
// ShouldStoreRoutes returns true if the AppConnector was created with the controlknob on
// and is storing its discovered routes persistently.
func (e *AppConnector) ShouldStoreRoutes() bool {
return e.storeRoutesFunc != nil
}
// storeRoutesLocked takes the current state of the AppConnector and persists it
func (e *AppConnector) storeRoutesLocked() error {
if !e.ShouldStoreRoutes() {
return nil
}
return e.storeRoutesFunc(&RouteInfo{
Control: e.controlRoutes,
Domains: e.domains,
Wildcards: e.wildcards,
})
}
// ClearRoutes removes all route state from the AppConnector.
func (e *AppConnector) ClearRoutes() error {
e.mu.Lock()
defer e.mu.Unlock()
e.controlRoutes = nil
e.domains = nil
e.wildcards = nil
return e.storeRoutesLocked()
}
// UpdateDomainsAndRoutes starts an asynchronous update of the configuration
@ -125,10 +177,26 @@ func (e *AppConnector) updateDomains(domains []string) {
for _, wc := range e.wildcards {
if dnsname.HasSuffix(d, wc) {
e.domains[d] = addrs
delete(oldDomains, d)
break
}
}
}
// Everything left in oldDomains is a domain we're no longer tracking,
// and if we are storing route info we can unadvertise its routes.
if e.ShouldStoreRoutes() {
toRemove := []netip.Prefix{}
for _, addrs := range oldDomains {
for _, a := range addrs {
toRemove = append(toRemove, netip.PrefixFrom(a, a.BitLen()))
}
}
if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", xmaps.Keys(oldDomains), toRemove, err)
}
}
e.logf("handling domains: %v and wildcards: %v", xmaps.Keys(e.domains), e.wildcards)
}
@ -152,6 +220,14 @@ func (e *AppConnector) updateRoutes(routes []netip.Prefix) {
var toRemove []netip.Prefix
// If we're storing routes and know e.controlRoutes is a good
// representation of what should be in AdvertisedRoutes we can stop
// advertising routes that used to be in e.controlRoutes but are not
// in routes.
if e.ShouldStoreRoutes() {
toRemove = routesWithout(e.controlRoutes, routes)
}
nextRoute:
for _, r := range routes {
for _, addr := range e.domains {
@ -170,6 +246,9 @@ nextRoute:
}
e.controlRoutes = routes
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
}
// Domains returns the currently configured domain list.
@ -380,6 +459,9 @@ func (e *AppConnector) scheduleAdvertisement(domain string, routes ...netip.Pref
e.logf("[v2] advertised route for %v: %v", domain, addr)
}
}
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
})
}
@ -400,3 +482,15 @@ func (e *AppConnector) addDomainAddrLocked(domain string, addr netip.Addr) {
func compareAddr(l, r netip.Addr) int {
return l.Compare(r)
}
// routesWithout returns the prefixes of a that are not in b, where a and b
// are unsorted slices of netip.Prefix.
func routesWithout(a, b []netip.Prefix) []netip.Prefix {
m := make(map[netip.Prefix]bool, len(b))
for _, p := range b {
m[p] = true
}
return slicesx.Filter(make([]netip.Prefix, 0, len(a)), a, func(p netip.Prefix) bool {
return !m[p]
})
}

@ -17,194 +17,238 @@ import (
"tailscale.com/util/must"
)
func fakeStoreRoutes(*RouteInfo) error { return nil }
func TestUpdateDomains(t *testing.T) {
ctx := context.Background()
a := NewAppConnector(t.Logf, nil)
a.UpdateDomains([]string{"example.com"})
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
}
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.Wait(ctx)
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
}
}
func TestUpdateRoutes(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
a.updateDomains([]string{"*.example.com"})
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com"})
// This route should be collapsed into the range
a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
a.Wait(ctx)
// This route should be collapsed into the range
a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
}
if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
}
// This route should not be collapsed or removed
a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
a.Wait(ctx)
// This route should not be collapsed or removed
a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
a.Wait(ctx)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
a.updateRoutes(routes)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
a.updateRoutes(routes)
slices.SortFunc(rc.Routes(), prefixCompare)
rc.SetRoutes(slices.Compact(rc.Routes()))
slices.SortFunc(routes, prefixCompare)
slices.SortFunc(rc.Routes(), prefixCompare)
rc.SetRoutes(slices.Compact(rc.Routes()))
slices.SortFunc(routes, prefixCompare)
// Ensure that the non-matching /32 is preserved, even though it's in the domains table.
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Errorf("added routes: got %v, want %v", rc.Routes(), routes)
}
// Ensure that the non-matching /32 is preserved, even though it's in the domains table.
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Errorf("added routes: got %v, want %v", rc.Routes(), routes)
}
// Ensure that the contained /32 is removed, replaced by the /24.
wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
// Ensure that the contained /32 is removed, replaced by the /24.
wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
}
}
}
func TestUpdateRoutesUnadvertisesContainedRoutes(t *testing.T) {
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.updateRoutes(routes)
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Fatalf("got %v, want %v", rc.Routes(), routes)
for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.updateRoutes(routes)
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Fatalf("got %v, want %v", rc.Routes(), routes)
}
}
}
func TestDomainRoutes(t *testing.T) {
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(context.Background())
want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")},
}
for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(context.Background())
want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")},
}
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want)
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want)
}
}
}
func TestObserveDNSResponse(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
// a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
// a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
// a CNAME record chain should result in a route being added if the chain
// matches a routed domain.
a.updateDomains([]string{"www.example.com", "example.com"})
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if it matches
// a routed domain, even if the match is only in the middle of the chain
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if the chain
// matches a routed domain.
a.updateDomains([]string{"www.example.com", "example.com"})
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
// a CNAME record chain should result in a route being added if it matches
// a routed domain, even if the match is only in the middle of the chain
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
// don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24")
a.updateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
// don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
// don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24")
a.updateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
}
}
}
func TestWildcardDomains(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
a.updateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want)
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
a.updateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want)
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
// There was an early regression where the wildcard domain was added repeatedly, this guards against that.
a.updateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
a.updateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
// There was an early regression where the wildcard domain was added repeatedly, this guards against that.
a.updateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
}
}
}
@ -310,3 +354,169 @@ func prefixCompare(a, b netip.Prefix) int {
}
return a.Addr().Compare(b.Addr())
}
func prefixes(in ...string) []netip.Prefix {
toRet := make([]netip.Prefix, len(in))
for i, s := range in {
toRet[i] = netip.MustParsePrefix(s)
}
return toRet
}
func TestUpdateRouteRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
// nothing has yet been advertised
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.2/32"))
a.Wait(ctx)
// the routes passed to UpdateDomainsAndRoutes have been advertised
assertRoutes("simple update", prefixes("1.2.3.1/32", "1.2.3.2/32"), []netip.Prefix{})
// one route the same, one different
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.3/32"))
a.Wait(ctx)
// old behavior: routes are not removed, resulting routes are both old and new
// (we have dupe 1.2.3.1 routes because the test RouteAdvertiser doesn't have the deduplication
// the real one does)
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior: routes are removed, resulting routes are new only
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes = prefixes("1.2.3.2/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateDomainRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateWildcardRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "*.b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("1.b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("2.b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for *.b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestRoutesWithout(t *testing.T) {
assert := func(msg string, got, want []netip.Prefix) {
if !slices.Equal(want, got) {
t.Errorf("%s: want %v, got %v", msg, want, got)
}
}
assert("empty routes", routesWithout([]netip.Prefix{}, []netip.Prefix{}), []netip.Prefix{})
assert("a empty", routesWithout([]netip.Prefix{}, prefixes("1.1.1.1/32", "1.1.1.2/32")), []netip.Prefix{})
assert("b empty", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), []netip.Prefix{}), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("no overlap", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.3/32", "1.1.1.4/32")), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("a has fewer", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32")), []netip.Prefix{})
assert("a has more", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32"), prefixes("1.1.1.1/32", "1.1.1.3/32")), prefixes("1.1.1.2/32", "1.1.1.4/32"))
}

@ -1019,6 +1019,20 @@ func (up *Updater) updateLinuxBinary() error {
return nil
}
func restartSystemdUnit(ctx context.Context) error {
if _, err := exec.LookPath("systemctl"); err != nil {
// Likely not a systemd-managed distro.
return errors.ErrUnsupported
}
if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
return fmt.Errorf("systemctl daemon-reload failed: %w\noutput: %s", err, out)
}
if out, err := exec.Command("systemctl", "restart", "tailscaled.service").CombinedOutput(); err != nil {
return fmt.Errorf("systemctl restart failed: %w\noutput: %s", err, out)
}
return nil
}
func (up *Updater) downloadLinuxTarball(ver string) (string, error) {
dlDir, err := os.UserCacheDir()
if err != nil {

@ -1,37 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package clientupdate
import (
"context"
"errors"
"fmt"
"github.com/coreos/go-systemd/v22/dbus"
)
func restartSystemdUnit(ctx context.Context) error {
c, err := dbus.NewWithContext(ctx)
if err != nil {
// Likely not a systemd-managed distro.
return errors.ErrUnsupported
}
defer c.Close()
if err := c.ReloadContext(ctx); err != nil {
return fmt.Errorf("failed to reload tailscaled.service: %w", err)
}
ch := make(chan string, 1)
if _, err := c.RestartUnitContext(ctx, "tailscaled.service", "replace", ch); err != nil {
return fmt.Errorf("failed to restart tailscaled.service: %w", err)
}
select {
case res := <-ch:
if res != "done" {
return fmt.Errorf("systemd service restart failed with result %q", res)
}
case <-ctx.Done():
return ctx.Err()
}
return nil
}

@ -1,15 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !linux
package clientupdate
import (
"context"
"errors"
)
func restartSystemdUnit(ctx context.Context) error {
return errors.ErrUnsupported
}

@ -8,6 +8,7 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"net/http"
@ -18,20 +19,6 @@ import (
"tailscale.com/tailcfg"
)
// findKeyInKubeSecret inspects the kube secret secretName for a data
// field called "authkey", and returns its value if present.
func findKeyInKubeSecret(ctx context.Context, secretName string) (string, error) {
s, err := kc.GetSecret(ctx, secretName)
if err != nil {
return "", err
}
ak, ok := s.Data["authkey"]
if !ok {
return "", nil
}
return string(ak), nil
}
// storeDeviceInfo writes deviceID into the "device_id" data field of the kube
// secret secretName.
func storeDeviceInfo(ctx context.Context, secretName string, deviceID tailcfg.StableNodeID, fqdn string, addresses []netip.Prefix) error {
@ -88,9 +75,59 @@ func deleteAuthKey(ctx context.Context, secretName string) error {
return nil
}
var kc *kube.Client
var kc kube.Client
// setupKube is responsible for doing any necessary configuration and checks to
// ensure that tailscale state storage and authentication mechanism will work on
// Kubernetes.
func (cfg *settings) setupKube(ctx context.Context) error {
if cfg.KubeSecret == "" {
return nil
}
canPatch, canCreate, err := kc.CheckSecretPermissions(ctx, cfg.KubeSecret)
if err != nil {
return fmt.Errorf("Some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
}
cfg.KubernetesCanPatch = canPatch
s, err := kc.GetSecret(ctx, cfg.KubeSecret)
if err != nil && kube.IsNotFoundErr(err) && !canCreate {
return fmt.Errorf("Tailscale state Secret %s does not exist and we don't have permissions to create it. "+
"If you intend to store tailscale state elsewhere than a Kubernetes Secret, "+
"you can explicitly set TS_KUBE_SECRET env var to an empty string. "+
"Else ensure that RBAC is set up that allows the service account associated with this installation to create Secrets.", cfg.KubeSecret)
} else if err != nil && !kube.IsNotFoundErr(err) {
return fmt.Errorf("Getting Tailscale state Secret %s: %v", cfg.KubeSecret, err)
}
if cfg.AuthKey == "" && !isOneStepConfig(cfg) {
if s == nil {
log.Print("TS_AUTHKEY not provided and kube secret does not exist, login will be interactive if needed.")
return nil
}
keyBytes, _ := s.Data["authkey"]
key := string(keyBytes)
if key != "" {
// This behavior of pulling authkeys from kube secrets was added
// at the same time as the patch permission, so we can enforce
// that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
return errors.New("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the secret to manage the authkey.")
}
cfg.AuthKey = key
} else {
log.Print("No authkey found in kube secret and TS_AUTHKEY not provided, login will be interactive if needed.")
}
}
return nil
}
func initKube(root string) {
func initKubeClient(root string) {
if root != "/" {
// If we are running in a test, we need to set the root path to the fake
// service account directory.

@ -0,0 +1,206 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"context"
"errors"
"testing"
"github.com/google/go-cmp/cmp"
"tailscale.com/kube"
)
func TestSetupKube(t *testing.T) {
tests := []struct {
name string
cfg *settings
wantErr bool
wantCfg *settings
kc kube.Client
}{
{
name: "TS_AUTHKEY set, state Secret exists",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, nil
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
},
{
name: "TS_AUTHKEY set, state Secret does not exist, we have permissions to create it",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
},
{
name: "TS_AUTHKEY set, state Secret does not exist, we do not have permissions to create it",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY set, we encounter a non-404 error when trying to retrieve the state Secret",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 403}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY set, we encounter a non-404 error when trying to check Secret permissions",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, errors.New("broken")
},
},
wantErr: true,
},
{
// Interactive login using URL in Pod logs
name: "TS_AUTHKEY not set, state Secret does not exist, we have permissions to create it",
cfg: &settings{
KubeSecret: "foo",
},
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
},
{
// Interactive login using URL in Pod logs
name: "TS_AUTHKEY not set, state Secret exists, but does not contain auth key",
cfg: &settings{
KubeSecret: "foo",
},
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{}, nil
},
},
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we do not have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
wantCfg: &settings{
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return true, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
wantCfg: &settings{
KubeSecret: "foo",
AuthKey: "foo",
KubernetesCanPatch: true,
},
},
}
for _, tt := range tests {
kc = tt.kc
t.Run(tt.name, func(t *testing.T) {
if err := tt.cfg.setupKube(context.Background()); (err != nil) != tt.wantErr {
t.Errorf("settings.setupKube() error = %v, wantErr %v", err, tt.wantErr)
}
if diff := cmp.Diff(*tt.cfg, *tt.wantCfg); diff != "" {
t.Errorf("unexpected contents of settings after running settings.setupKube()\n(-got +want):\n%s", diff)
}
})
}
}
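The FakeClient used throughout these cases follows the common function-field fake pattern: each method of the real kube client delegates to an overridable func field, so a test stubs only the calls it expects. A minimal, self-contained sketch of the pattern (the Secret and Client types here are simplified stand-ins, not the real kube package API):

package main

import (
	"context"
	"fmt"
)

// Secret stands in for kube.Secret in this sketch.
type Secret struct{ Data map[string][]byte }

// Client is the narrow interface the code under test depends on.
type Client interface {
	GetSecret(ctx context.Context, name string) (*Secret, error)
	CheckSecretPermissions(ctx context.Context, name string) (canPatch, canCreate bool, err error)
}

// FakeClient implements Client via overridable function fields.
type FakeClient struct {
	GetSecretImpl              func(context.Context, string) (*Secret, error)
	CheckSecretPermissionsImpl func(context.Context, string) (bool, bool, error)
}

func (f *FakeClient) GetSecret(ctx context.Context, name string) (*Secret, error) {
	return f.GetSecretImpl(ctx, name)
}

func (f *FakeClient) CheckSecretPermissions(ctx context.Context, name string) (bool, bool, error) {
	return f.CheckSecretPermissionsImpl(ctx, name)
}

func main() {
	// Stub exactly the behavior a single test case needs.
	var c Client = &FakeClient{
		GetSecretImpl: func(context.Context, string) (*Secret, error) {
			return &Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
		},
		CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
			return true, false, nil
		},
	}
	s, _ := c.GetSecret(context.Background(), "foo")
	fmt.Printf("authkey: %s\n", s.Data["authkey"])
}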

@@ -171,44 +171,16 @@ func main() {
}
}
// Context is used for all setup stuff until we're in steady
// state, so that if something is hanging we eventually time out
// and crashloop the container.
bootCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
if cfg.InKubernetes {
initKubeClient(cfg.Root)
if err := cfg.setupKube(bootCtx); err != nil {
log.Fatalf("error setting up for running on Kubernetes: %v", err)
}
}
@@ -559,25 +531,26 @@ runLoop:
log.Println("Startup complete, waiting for shutdown signal")
startupTasksDone = true
// Wait on tailscaled process. It won't
// be cleaned up by default when the
// container exits as it is not PID1.
// TODO (irbekrm): perhaps we can
// replace the reaper by a running
// cmd.Wait in a goroutine immediately
// after starting tailscaled?
reaper := func() {
defer wg.Done()
for {
var status unix.WaitStatus
_, err := unix.Wait4(daemonProcess.Pid, &status, 0, nil)
if errors.Is(err, unix.EINTR) {
continue
}
if err != nil {
log.Fatalf("Waiting for tailscaled to exit: %v", err)
}
log.Print("tailscaled exited")
os.Exit(0)
}
}
}
wg.Add(1)
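The TODO above suggests replacing the reaper with a cmd.Wait goroutine. A minimal sketch of that alternative, assuming tailscaled is started via os/exec (the exec.Command invocation here is a placeholder; containerboot passes real flags):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("tailscaled") // placeholder invocation
	if err := cmd.Start(); err != nil {
		log.Fatalf("starting tailscaled: %v", err)
	}
	// cmd.Wait reaps the child when it exits, replacing the unix.Wait4 loop.
	if err := cmd.Wait(); err != nil {
		log.Printf("tailscaled exited with error: %v", err)
	}
	log.Print("tailscaled exited")
	os.Exit(0)
}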

@@ -20,7 +20,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
github.com/google/uuid from tailscale.com/util/fastuuid
github.com/hdevalence/ed25519consensus from tailscale.com/tka
L github.com/josharian/native from github.com/mdlayher/netlink+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/netmon
L github.com/jsimonetti/rtnetlink/internal/unix from github.com/jsimonetti/rtnetlink
L 💣 github.com/mdlayher/netlink from github.com/google/nftables+
L 💣 github.com/mdlayher/netlink/nlenc from github.com/jsimonetti/rtnetlink+
@@ -47,7 +47,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
github.com/x448/float16 from github.com/fxamacker/cbor/v2
💣 go4.org/mem from tailscale.com/client/tailscale+
go4.org/netipx from tailscale.com/net/tsaddr+
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/netmon+
google.golang.org/protobuf/encoding/protodelim from github.com/prometheus/common/expfmt
google.golang.org/protobuf/encoding/prototext from github.com/prometheus/common/expfmt+
google.golang.org/protobuf/encoding/protowire from google.golang.org/protobuf/encoding/protodelim+
@@ -89,18 +89,17 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/disco from tailscale.com/derp
tailscale.com/drive from tailscale.com/client/tailscale+
tailscale.com/envknob from tailscale.com/client/tailscale+
tailscale.com/health from tailscale.com/net/tlsdial+
tailscale.com/hostinfo from tailscale.com/net/netmon+
tailscale.com/ipn from tailscale.com/client/tailscale
tailscale.com/ipn/ipnstate from tailscale.com/client/tailscale+
tailscale.com/metrics from tailscale.com/cmd/derper+
tailscale.com/net/dnscache from tailscale.com/derp/derphttp
tailscale.com/net/flowtrack from tailscale.com/net/packet+
tailscale.com/net/ktimeout from tailscale.com/cmd/derper
tailscale.com/net/netaddr from tailscale.com/ipn+
tailscale.com/net/netknob from tailscale.com/net/netns
💣 tailscale.com/net/netmon from tailscale.com/derp/derphttp+
tailscale.com/net/netns from tailscale.com/derp/derphttp
tailscale.com/net/netutil from tailscale.com/client/tailscale
tailscale.com/net/packet from tailscale.com/wgengine/filter
@@ -117,7 +116,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/syncs from tailscale.com/cmd/derper+
tailscale.com/tailcfg from tailscale.com/client/tailscale+
tailscale.com/tka from tailscale.com/client/tailscale+
W tailscale.com/tsconst from tailscale.com/net/netmon
tailscale.com/tstime from tailscale.com/derp+
tailscale.com/tstime/mono from tailscale.com/tstime/rate
tailscale.com/tstime/rate from tailscale.com/derp+
@@ -138,6 +137,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/types/structs from tailscale.com/ipn+
tailscale.com/types/tkatype from tailscale.com/client/tailscale+
tailscale.com/types/views from tailscale.com/ipn+
tailscale.com/util/cibuild from tailscale.com/health
tailscale.com/util/clientmetric from tailscale.com/net/netmon+
tailscale.com/util/cloudenv from tailscale.com/hostinfo+
W tailscale.com/util/cmpver from tailscale.com/net/tshttpproxy
@@ -148,7 +148,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/util/httpm from tailscale.com/client/tailscale
tailscale.com/util/lineread from tailscale.com/hostinfo+
L tailscale.com/util/linuxfw from tailscale.com/net/netns
tailscale.com/util/mak from tailscale.com/health+
tailscale.com/util/multierr from tailscale.com/health+
tailscale.com/util/nocasemaps from tailscale.com/types/ipproto
tailscale.com/util/set from tailscale.com/derp+

@@ -15,6 +15,7 @@ import (
"tailscale.com/derp"
"tailscale.com/derp/derphttp"
"tailscale.com/net/netmon"
"tailscale.com/types/key"
"tailscale.com/types/logger"
)
@@ -36,7 +37,8 @@ func startMesh(s *derp.Server) error {
func startMeshWithHost(s *derp.Server, host string) error {
logf := logger.WithPrefix(log.Printf, fmt.Sprintf("mesh(%q): ", host))
netMon := netmon.NewStatic() // good enough for cmd/derper; no need for netns fanciness
c, err := derphttp.NewClient(s.PrivateKey(), "https://"+host+"/derp", logf, netMon)
if err != nil {
return err
}
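With this change, derphttp.NewClient takes a *netmon.Monitor as its final argument. A hedged sketch of standalone client construction under that signature (the server URL and throwaway key are assumptions for illustration):

package main

import (
	"log"

	"tailscale.com/derp/derphttp"
	"tailscale.com/net/netmon"
	"tailscale.com/types/key"
)

func main() {
	priv := key.NewNode()             // throwaway key for the sketch
	netMon := netmon.NewStatic()      // static monitor: no interface-change tracking
	c, err := derphttp.NewClient(priv, "https://derp.example.com/derp", log.Printf, netMon)
	if err != nil {
		log.Fatalf("NewClient: %v", err)
	}
	_ = c
}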

@@ -37,9 +37,16 @@ spec:
spec:
description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
type: object
properties:
metrics:
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.
type: object
required:
- enable
properties:
enable:
description: Setting enable to true will make the proxy serve Tailscale metrics at <pod-ip>:9001/debug/metrics. Defaults to false.
type: boolean
statefulSet:
description: Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector).
type: object
@@ -58,6 +65,526 @@ spec:
description: Configuration for the proxy Pod.
type: object
properties:
affinity:
description: Proxy Pod's affinity rules. By default, the Tailscale Kubernetes operator does not apply any affinity rules. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#affinity
type: object
properties:
nodeAffinity:
description: Describes node affinity scheduling rules for the pod.
type: object
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
type: array
items:
description: An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
type: object
required:
- preference
- weight
properties:
preference:
description: A node selector term, associated with the corresponding weight.
type: object
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
type: array
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchFields:
description: A list of node selector requirements by node's fields.
type: array
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
type: array
items:
type: string
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
type: integer
format: int32
requiredDuringSchedulingIgnoredDuringExecution:
description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
type: object
required:
- nodeSelectorTerms
properties:
nodeSelectorTerms:
description: Required. A list of node selector terms. The terms are ORed.
type: array
items:
description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
type: object
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
type: array
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchFields:
description: A list of node selector requirements by node's fields.
type: array
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
type: array
items:
type: string
x-kubernetes-map-type: atomic
x-kubernetes-map-type: atomic
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
type: object
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
type: array
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
type: object
required:
- podAffinityTerm
- weight
properties:
podAffinityTerm:
description: Required. A pod affinity term, associated with the corresponding weight.
type: object
required:
- topologyKey
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
type: array
items:
type: string
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
weight:
description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
type: integer
format: int32
requiredDuringSchedulingIgnoredDuringExecution:
description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
type: array
items:
description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
type: object
required:
- topologyKey
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
type: array
items:
type: string
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
podAntiAffinity:
description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).
type: object
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
type: array
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
type: object
required:
- podAffinityTerm
- weight
properties:
podAffinityTerm:
description: Required. A pod affinity term, associated with the corresponding weight.
type: object
required:
- topologyKey
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
type: array
items:
type: string
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
weight:
description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
type: integer
format: int32
requiredDuringSchedulingIgnoredDuringExecution:
description: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
type: array
items:
description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
type: object
required:
- topologyKey
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
type: array
items:
type: string
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
annotations:
description: Annotations that will be added to the proxy Pod. Any annotations specified here will be merged with the default annotations applied to the Pod by the Tailscale Kubernetes operator. Annotations must be valid Kubernetes annotations. https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set
type: object

@@ -3,13 +3,15 @@ kind: ProxyClass
metadata:
name: prod
spec:
metrics:
enable: true
statefulSet:
annotations:
platform-component: infra
pod:
labels:
team: eng
nodeSelector:
kubernetes.io/os: "linux"
imagePullSecrets:
- name: "foo"

@@ -193,6 +193,15 @@ spec:
spec:
description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
properties:
metrics:
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation.
properties:
enable:
description: Setting enable to true will make the proxy serve Tailscale metrics at <pod-ip>:9001/debug/metrics. Defaults to false.
type: boolean
required:
- enable
type: object
statefulSet:
description: Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector).
properties:
@@ -209,6 +218,526 @@ spec:
pod:
description: Configuration for the proxy Pod.
properties:
affinity:
description: Proxy Pod's affinity rules. By default, the Tailscale Kubernetes operator does not apply any affinity rules. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#affinity
properties:
nodeAffinity:
description: Describes node affinity scheduling rules for the pod.
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
items:
description: An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
properties:
preference:
description: A node selector term, associated with the corresponding weight.
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchFields:
description: A list of node selector requirements by node's fields.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
type: integer
required:
- preference
- weight
type: object
type: array
requiredDuringSchedulingIgnoredDuringExecution:
description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
properties:
nodeSelectorTerms:
description: Required. A list of node selector terms. The terms are ORed.
items:
description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchFields:
description: A list of node selector requirements by node's fields.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
properties:
podAffinityTerm:
description: Required. A pod affinity term, associated with the corresponding weight.
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
type: string
type: array
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
required:
- topologyKey
type: object
weight:
description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
format: int32
type: integer
required:
- podAffinityTerm
- weight
type: object
type: array
requiredDuringSchedulingIgnoredDuringExecution:
description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
items:
description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
type: string
type: array
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
required:
- topologyKey
type: object
type: array
type: object
podAntiAffinity:
description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
properties:
podAffinityTerm:
description: Required. A pod affinity term, associated with the corresponding weight.
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
type: string
type: array
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
required:
- topologyKey
type: object
weight:
description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
format: int32
type: integer
required:
- podAffinityTerm
- weight
type: object
type: array
requiredDuringSchedulingIgnoredDuringExecution:
description: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
items:
description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
type: string
type: array
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
required:
- topologyKey
type: object
type: array
type: object
type: object
annotations:
additionalProperties:
type: string
@ -637,8 +1166,6 @@ spec:
type: array
type: object
type: object
required:
- statefulSet
type: object
status:
description: Status of the ProxyClass. This is set and managed automatically. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

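The affinity schema above is a verbatim embed of the upstream Kubernetes pod-affinity API, so anything valid on a Pod spec is valid on a ProxyClass. For orientation, here is a minimal sketch of the same shape built with the operator's Go types, mirroring the test fixture later in this compare; the tsapi import path and field names are taken from that fixture and should be treated as assumptions for other versions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
)

func main() {
	// A ProxyClass whose proxy Pods carry a required node-affinity rule;
	// the empty NodeSelector matches the test fixture and would hold
	// real NodeSelectorTerms in practice.
	pc := &tsapi.ProxyClass{
		Spec: tsapi.ProxyClassSpec{
			StatefulSet: &tsapi.StatefulSet{
				Pod: &tsapi.Pod{
					Affinity: &corev1.Affinity{
						NodeAffinity: &corev1.NodeAffinity{
							RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{},
						},
					},
				},
			},
		},
	}
	fmt.Printf("%+v\n", pc.Spec.StatefulSet.Pod.Affinity)
}
```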
@ -20,3 +20,7 @@ spec:
env:
- name: TS_USERSPACE
value: "true"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP

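The POD_IP variable added here uses the Kubernetes downward API: the kubelet resolves status.podIP when the container starts, which is what lets the operator interpolate $(POD_IP) into tailscaled's flags in the metrics change below. A minimal sketch of the same env var expressed with client-go types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIPEnvVar builds the same downward-API reference as the YAML
// above: POD_IP is populated from the Pod's status.podIP field.
func podIPEnvVar() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "POD_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", podIPEnvVar())
}
```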
@ -582,7 +582,7 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
logger.Debugf("reconciling statefulset %s/%s", ss.GetNamespace(), ss.GetName())
if sts.ProxyClass != "" {
logger.Debugf("configuring proxy resources with ProxyClass %s", sts.ProxyClass)
ss = applyProxyClassToStatefulSet(proxyClass, ss)
ss = applyProxyClassToStatefulSet(proxyClass, ss, sts, logger)
}
updateSS := func(s *appsv1.StatefulSet) {
s.Spec = ss.Spec
@ -613,8 +613,28 @@ func mergeStatefulSetLabelsOrAnnots(current, custom map[string]string, managed [
return custom
}
func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet) *appsv1.StatefulSet {
if pc == nil || ss == nil || pc.Spec.StatefulSet == nil {
func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet, stsCfg *tailscaleSTSConfig, logger *zap.SugaredLogger) *appsv1.StatefulSet {
if pc == nil || ss == nil {
return ss
}
if pc.Spec.Metrics != nil && pc.Spec.Metrics.Enable {
if stsCfg.TailnetTargetFQDN == "" && stsCfg.TailnetTargetIP == "" && !stsCfg.ForwardClusterTrafficViaL7IngressProxy {
enableMetrics(ss, pc)
} else if stsCfg.ForwardClusterTrafficViaL7IngressProxy {
// TODO (irbekrm): fix this
// For Ingress proxies that have been configured with
// tailscale.com/experimental-forward-cluster-traffic-via-ingress
// annotation, all cluster traffic is forwarded to the
// Ingress backend(s).
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
} else {
// TODO (irbekrm): fix this
// For egress proxies, currently all cluster traffic is forwarded to the tailnet target.
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
}
}
if pc.Spec.StatefulSet == nil {
return ss
}
@ -641,6 +661,7 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet)
ss.Spec.Template.Spec.ImagePullSecrets = wantsPod.ImagePullSecrets
ss.Spec.Template.Spec.NodeName = wantsPod.NodeName
ss.Spec.Template.Spec.NodeSelector = wantsPod.NodeSelector
ss.Spec.Template.Spec.Affinity = wantsPod.Affinity
ss.Spec.Template.Spec.Tolerations = wantsPod.Tolerations
// Update containers.
@ -680,6 +701,21 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet)
return ss
}
func enableMetrics(ss *appsv1.StatefulSet, pc *tsapi.ProxyClass) {
for i, c := range ss.Spec.Template.Spec.Containers {
if c.Name == "tailscale" {
// Serve metrics on <pod-ip>:9001/debug/metrics. If
// we didn't specify the Pod IP here, the proxy would, in
// some cases, also listen on its Tailscale IP - we don't
// want folks to start relying on this side effect as a
// feature.
ss.Spec.Template.Spec.Containers[i].Env = append(ss.Spec.Template.Spec.Containers[i].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
ss.Spec.Template.Spec.Containers[i].Ports = append(ss.Spec.Template.Spec.Containers[i].Ports, corev1.ContainerPort{Name: "metrics", Protocol: "TCP", HostPort: 9001, ContainerPort: 9001})
break
}
}
}
// tailscaledConfig takes a proxy config, a newly generated auth key if
// generated and a Secret with the previous proxy state and auth key, and
// returns tailscaled configuration and a hash of that configuration.

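With enableMetrics applied, each proxy Pod serves tailscaled's debug metrics on port 9001 of its Pod IP. A hedged sketch of a scrape from inside the cluster; the Pod IP is hypothetical and would come from kubectl or the Kubernetes API in practice:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	podIP := "10.0.0.12" // hypothetical; discover the real Pod IP via the Kubernetes API
	resp, err := http.Get(fmt.Sprintf("http://%s:9001/debug/metrics", podIP))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(body)) // varz-style debug metrics
}
```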
@ -14,6 +14,7 @@ import (
"testing"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
@ -51,6 +52,10 @@ func Test_statefulSetNameBase(t *testing.T) {
}
func Test_applyProxyClassToStatefulSet(t *testing.T) {
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
// Setup
proxyClassAllOpts := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
@ -66,6 +71,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
ImagePullSecrets: []corev1.LocalObjectReference{{Name: "docker-creds"}},
NodeName: "some-node",
NodeSelector: map[string]string{"beta.kubernetes.io/os": "linux"},
Affinity: &corev1.Affinity{NodeAffinity: &corev1.NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{}}},
Tolerations: []corev1.Toleration{{Key: "", Operator: "Exists"}},
TailscaleContainer: &tsapi.Container{
SecurityContext: &corev1.SecurityContext{
@ -104,6 +110,12 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
},
},
}
proxyClassMetrics := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
Metrics: &tsapi.Metrics{Enable: true},
},
}
var userspaceProxySS, nonUserspaceProxySS appsv1.StatefulSet
if err := yaml.Unmarshal(userspaceProxyYaml, &userspaceProxySS); err != nil {
t.Fatalf("unmarshaling userspace proxy template: %v", err)
@ -139,6 +151,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.ImagePullSecrets = proxyClassAllOpts.Spec.StatefulSet.Pod.ImagePullSecrets
wantSS.Spec.Template.Spec.NodeName = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeName
wantSS.Spec.Template.Spec.NodeSelector = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeSelector
wantSS.Spec.Template.Spec.Affinity = proxyClassAllOpts.Spec.StatefulSet.Pod.Affinity
wantSS.Spec.Template.Spec.Tolerations = proxyClassAllOpts.Spec.StatefulSet.Pod.Tolerations
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.InitContainers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleInitContainer.SecurityContext
@ -147,7 +160,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.InitContainers[0].Env = append(wantSS.Spec.Template.Spec.InitContainers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
gotSS := applyProxyClassToStatefulSet(proxyClassAllOpts, nonUserspaceProxySS.DeepCopy())
gotSS := applyProxyClassToStatefulSet(proxyClassAllOpts, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with all fields set to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
}
@ -160,7 +173,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, nonUserspaceProxySS.DeepCopy())
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
}
@ -176,11 +189,12 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.ImagePullSecrets = proxyClassAllOpts.Spec.StatefulSet.Pod.ImagePullSecrets
wantSS.Spec.Template.Spec.NodeName = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeName
wantSS.Spec.Template.Spec.NodeSelector = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeSelector
wantSS.Spec.Template.Spec.Affinity = proxyClassAllOpts.Spec.StatefulSet.Pod.Affinity
wantSS.Spec.Template.Spec.Tolerations = proxyClassAllOpts.Spec.StatefulSet.Pod.Tolerations
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
gotSS = applyProxyClassToStatefulSet(proxyClassAllOpts, userspaceProxySS.DeepCopy())
gotSS = applyProxyClassToStatefulSet(proxyClassAllOpts, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
}
@ -192,10 +206,19 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, userspaceProxySS.DeepCopy())
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
}
// 5. Test that a ProxyClass with metrics enabled gets correctly applied to a StatefulSet.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
wantSS.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{{Name: "metrics", Protocol: "TCP", ContainerPort: 9001, HostPort: 9001}}
gotSS = applyProxyClassToStatefulSet(proxyClassMetrics, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with metrics enabled to a StatefulSet (-got +want):\n%s", diff)
}
}
func mergeMapKeys(a, b map[string]string) map[string]string {

@ -15,6 +15,7 @@ import (
"time"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
@ -54,6 +55,10 @@ type configOpts struct {
func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet {
t.Helper()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
tsContainer := corev1.Container{
Name: "tailscale",
Image: "tailscale/tailscale",
@ -205,18 +210,23 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
if err := cl.Get(context.Background(), types.NamespacedName{Name: opts.proxyClass}, proxyClass); err != nil {
t.Fatalf("error getting ProxyClass: %v", err)
}
return applyProxyClassToStatefulSet(proxyClass, ss)
return applyProxyClassToStatefulSet(proxyClass, ss, new(tailscaleSTSConfig), zl.Sugar())
}
return ss
}
func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet {
t.Helper()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
tsContainer := corev1.Container{
Name: "tailscale",
Image: "tailscale/tailscale",
Env: []corev1.EnvVar{
{Name: "TS_USERSPACE", Value: "true"},
{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "status.podIP"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "TS_KUBE_SECRET", Value: opts.secretName},
{Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH", Value: "/etc/tsconfig/tailscaled"},
{Name: "TS_SERVE_CONFIG", Value: "/etc/tailscaled/serve-config"},
@ -301,7 +311,7 @@ func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *apps
if err := cl.Get(context.Background(), types.NamespacedName{Name: opts.proxyClass}, proxyClass); err != nil {
t.Fatalf("error getting ProxyClass: %v", err)
}
return applyProxyClassToStatefulSet(proxyClass, ss)
return applyProxyClassToStatefulSet(proxyClass, ss, new(tailscaleSTSConfig), zl.Sugar())
}
return ss
}

@ -53,6 +53,7 @@ func runNetcheck(ctx context.Context, args []string) error {
return err
}
c := &netcheck.Client{
NetMon: netMon,
PortMapper: portmapper.NewClient(logf, netMon, nil, nil, nil),
UseDNSCache: false, // always resolve, don't cache
}

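This hunk makes the netcheck client's network monitor explicit. A sketch of constructing one the way runNetcheck now does, using a static monitor for brevity (a real CLI run starts a live one); the report call itself is omitted:

```go
package main

import (
	"fmt"
	"log"

	"tailscale.com/net/netcheck"
	"tailscale.com/net/netmon"
	"tailscale.com/net/portmapper"
)

func main() {
	netMon := netmon.NewStatic()
	c := &netcheck.Client{
		NetMon:      netMon,
		PortMapper:  portmapper.NewClient(log.Printf, netMon, nil, nil, nil),
		UseDNSCache: false, // always resolve, don't cache
	}
	fmt.Println("netcheck client ready:", c.NetMon != nil)
}
```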
@ -23,7 +23,7 @@ import (
"golang.org/x/net/idna"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
"tailscale.com/net/interfaces"
"tailscale.com/net/netmon"
"tailscale.com/util/dnsname"
)
@ -102,7 +102,7 @@ func runStatus(ctx context.Context, args []string) error {
if err != nil {
return err
}
statusURL := interfaces.HTTPOfListener(ln)
statusURL := netmon.HTTPOfListener(ln)
printf("Serving Tailscale status at %v ...\n", statusURL)
go func() {
<-ctx.Done()

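These cli hunks are part of the interfaces-to-netmon merge that runs through the rest of this compare: tailscale.com/net/interfaces is folded into tailscale.com/net/netmon, and its helpers move with it. A sketch of the relocated helper, assuming a module version after the merge:

```go
package main

import (
	"fmt"
	"log"
	"net"

	"tailscale.com/net/netmon"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	// HTTPOfListener returns a best-effort browsable URL for ln,
	// the same call runStatus makes above.
	fmt.Println("serving at", netmon.HTTPOfListener(ln))
}
```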
@ -6,11 +6,9 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
W github.com/alexbrainman/sspi/internal/common from github.com/alexbrainman/sspi/negotiate
W 💣 github.com/alexbrainman/sspi/negotiate from tailscale.com/net/tshttpproxy
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
L github.com/coreos/go-systemd/v22/dbus from tailscale.com/clientupdate
W 💣 github.com/dblohm7/wingoes from github.com/dblohm7/wingoes/pe+
W 💣 github.com/dblohm7/wingoes/pe from tailscale.com/util/winutil/authenticode
github.com/fxamacker/cbor/v2 from tailscale.com/tka
L 💣 github.com/godbus/dbus/v5 from github.com/coreos/go-systemd/v22/dbus
github.com/golang/groupcache/lru from tailscale.com/net/dnscache
L github.com/google/nftables from tailscale.com/util/linuxfw
L 💣 github.com/google/nftables/alignedbuff from github.com/google/nftables/xt
@ -23,7 +21,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
github.com/gorilla/securecookie from github.com/gorilla/csrf
github.com/hdevalence/ed25519consensus from tailscale.com/clientupdate/distsign+
L github.com/josharian/native from github.com/mdlayher/netlink+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/interfaces+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/netmon
L github.com/jsimonetti/rtnetlink/internal/unix from github.com/jsimonetti/rtnetlink
github.com/kballard/go-shellquote from tailscale.com/cmd/tailscale/cli
💣 github.com/mattn/go-colorable from tailscale.com/cmd/tailscale/cli
@ -60,7 +58,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
github.com/x448/float16 from github.com/fxamacker/cbor/v2
💣 go4.org/mem from tailscale.com/client/tailscale+
go4.org/netipx from tailscale.com/net/tsaddr+
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/interfaces+
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/netmon+
k8s.io/client-go/util/homedir from tailscale.com/cmd/tailscale/cli
nhooyr.io/websocket from tailscale.com/control/controlhttp+
nhooyr.io/websocket/internal/errd from nhooyr.io/websocket
@ -88,7 +86,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
tailscale.com/disco from tailscale.com/derp
tailscale.com/drive from tailscale.com/client/tailscale+
tailscale.com/envknob from tailscale.com/client/tailscale+
tailscale.com/health from tailscale.com/net/tlsdial
tailscale.com/health from tailscale.com/net/tlsdial+
tailscale.com/health/healthmsg from tailscale.com/cmd/tailscale/cli
tailscale.com/hostinfo from tailscale.com/client/web+
tailscale.com/ipn from tailscale.com/client/tailscale+
@ -99,12 +97,11 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
tailscale.com/net/dnscache from tailscale.com/control/controlhttp+
tailscale.com/net/dnsfallback from tailscale.com/control/controlhttp
tailscale.com/net/flowtrack from tailscale.com/net/packet+
💣 tailscale.com/net/interfaces from tailscale.com/cmd/tailscale/cli+
tailscale.com/net/netaddr from tailscale.com/ipn+
tailscale.com/net/netcheck from tailscale.com/cmd/tailscale/cli
tailscale.com/net/neterror from tailscale.com/net/netcheck+
tailscale.com/net/netknob from tailscale.com/net/netns
tailscale.com/net/netmon from tailscale.com/cmd/tailscale/cli+
💣 tailscale.com/net/netmon from tailscale.com/cmd/tailscale/cli+
tailscale.com/net/netns from tailscale.com/derp/derphttp+
tailscale.com/net/netutil from tailscale.com/client/tailscale+
tailscale.com/net/packet from tailscale.com/wgengine/capture+
@ -123,7 +120,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
tailscale.com/tailcfg from tailscale.com/client/tailscale+
tailscale.com/tempfork/spf13/cobra from tailscale.com/cmd/tailscale/cli/ffcomplete+
tailscale.com/tka from tailscale.com/client/tailscale+
W tailscale.com/tsconst from tailscale.com/net/interfaces
W tailscale.com/tsconst from tailscale.com/net/netmon
tailscale.com/tstime from tailscale.com/control/controlhttp+
tailscale.com/tstime/mono from tailscale.com/tstime/rate
tailscale.com/tstime/rate from tailscale.com/cmd/tailscale/cli+
@ -142,6 +139,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
tailscale.com/types/structs from tailscale.com/ipn+
tailscale.com/types/tkatype from tailscale.com/types/key+
tailscale.com/types/views from tailscale.com/tailcfg+
tailscale.com/util/cibuild from tailscale.com/health
tailscale.com/util/clientmetric from tailscale.com/net/netcheck+
tailscale.com/util/cloudenv from tailscale.com/net/dnscache+
tailscale.com/util/cmpver from tailscale.com/net/tshttpproxy+
@ -271,7 +269,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
image/png from github.com/skip2/go-qrcode
io from archive/tar+
io/fs from archive/tar+
io/ioutil from github.com/godbus/dbus/v5+
io/ioutil from github.com/mitchellh/go-ps+
log from expvar+
log/internal from log
maps from tailscale.com/clientupdate+

@ -21,8 +21,8 @@ import (
"time"
"tailscale.com/derp/derphttp"
"tailscale.com/health"
"tailscale.com/ipn"
"tailscale.com/net/interfaces"
"tailscale.com/net/netmon"
"tailscale.com/net/tshttpproxy"
"tailscale.com/tailcfg"
@ -72,7 +72,7 @@ func debugMode(args []string) error {
}
func runMonitor(ctx context.Context, loop bool) error {
dump := func(st *interfaces.State) {
dump := func(st *netmon.State) {
j, _ := json.MarshalIndent(st, "", " ")
os.Stderr.Write(j)
}
@ -157,6 +157,7 @@ func getURL(ctx context.Context, urlStr string) error {
}
func checkDerp(ctx context.Context, derpRegion string) (err error) {
ht := new(health.Tracker)
req, err := http.NewRequestWithContext(ctx, "GET", ipn.DefaultControlURL+"/derpmap/default", nil)
if err != nil {
return fmt.Errorf("create derp map request: %w", err)
@ -195,6 +196,8 @@ func checkDerp(ctx context.Context, derpRegion string) (err error) {
c1 := derphttp.NewRegionClient(priv1, log.Printf, nil, getRegion)
c2 := derphttp.NewRegionClient(priv2, log.Printf, nil, getRegion)
c1.HealthTracker = ht
c2.HealthTracker = ht
defer func() {
if err != nil {
c1.Close()

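checkDerp now allocates a single health.Tracker and attaches it to both DERP clients, replacing package-global health state. A sketch of that wiring in isolation; the region is a stand-in, and a static monitor is passed because region clients require a non-nil netmon.Monitor per the derphttp change at the end of this compare:

```go
package main

import (
	"log"

	"tailscale.com/derp/derphttp"
	"tailscale.com/health"
	"tailscale.com/net/netmon"
	"tailscale.com/tailcfg"
	"tailscale.com/types/key"
)

func main() {
	ht := new(health.Tracker)
	getRegion := func() *tailcfg.DERPRegion {
		return &tailcfg.DERPRegion{RegionID: 900} // stand-in region
	}
	c := derphttp.NewRegionClient(key.NewNode(), log.Printf, netmon.NewStatic(), getRegion)
	c.HealthTracker = ht // one tracker can be shared across clients
	defer c.Close()
}
```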
@ -80,7 +80,6 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
L github.com/aws/smithy-go/waiter from github.com/aws/aws-sdk-go-v2/service/ssm
github.com/bits-and-blooms/bitset from github.com/gaissmai/bart
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
L github.com/coreos/go-systemd/v22/dbus from tailscale.com/clientupdate
LD 💣 github.com/creack/pty from tailscale.com/ssh/tailssh
W 💣 github.com/dblohm7/wingoes from github.com/dblohm7/wingoes/com+
W 💣 github.com/dblohm7/wingoes/com from tailscale.com/cmd/tailscaled+
@ -98,7 +97,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
github.com/go-json-experiment/json/jsontext from tailscale.com/logtail
W 💣 github.com/go-ole/go-ole from github.com/go-ole/go-ole/oleutil+
W 💣 github.com/go-ole/go-ole/oleutil from tailscale.com/wgengine/winnet
L 💣 github.com/godbus/dbus/v5 from github.com/coreos/go-systemd/v22/dbus+
L 💣 github.com/godbus/dbus/v5 from tailscale.com/net/dns+
github.com/golang/groupcache/lru from tailscale.com/net/dnscache
github.com/google/btree from gvisor.dev/gvisor/pkg/tcpip/header+
L github.com/google/nftables from tailscale.com/util/linuxfw
@ -119,7 +118,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
github.com/jellydator/ttlcache/v3 from tailscale.com/drive/driveimpl/compositedav
L github.com/jmespath/go-jmespath from github.com/aws/aws-sdk-go-v2/service/ssm
L github.com/josharian/native from github.com/mdlayher/netlink+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/interfaces+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/netmon
L github.com/jsimonetti/rtnetlink/internal/unix from github.com/jsimonetti/rtnetlink
github.com/klauspost/compress from github.com/klauspost/compress/zstd
github.com/klauspost/compress/fse from github.com/klauspost/compress/huff0
@ -295,13 +294,12 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
tailscale.com/net/dnscache from tailscale.com/control/controlclient+
tailscale.com/net/dnsfallback from tailscale.com/cmd/tailscaled+
tailscale.com/net/flowtrack from tailscale.com/net/packet+
💣 tailscale.com/net/interfaces from tailscale.com/cmd/tailscaled+
tailscale.com/net/netaddr from tailscale.com/ipn+
tailscale.com/net/netcheck from tailscale.com/wgengine/magicsock+
tailscale.com/net/neterror from tailscale.com/net/dns/resolver+
tailscale.com/net/netkernelconf from tailscale.com/ipn/ipnlocal
tailscale.com/net/netknob from tailscale.com/logpolicy+
tailscale.com/net/netmon from tailscale.com/cmd/tailscaled+
💣 tailscale.com/net/netmon from tailscale.com/cmd/tailscaled+
tailscale.com/net/netns from tailscale.com/cmd/tailscaled+
W 💣 tailscale.com/net/netstat from tailscale.com/portlist
tailscale.com/net/netutil from tailscale.com/client/tailscale+
@ -333,7 +331,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
LD tailscale.com/tempfork/gliderlabs/ssh from tailscale.com/ssh/tailssh
tailscale.com/tempfork/heap from tailscale.com/wgengine/magicsock
tailscale.com/tka from tailscale.com/client/tailscale+
W tailscale.com/tsconst from tailscale.com/net/interfaces
W tailscale.com/tsconst from tailscale.com/net/netmon
tailscale.com/tsd from tailscale.com/cmd/tailscaled+
tailscale.com/tstime from tailscale.com/control/controlclient+
tailscale.com/tstime/mono from tailscale.com/net/tstun+
@ -358,6 +356,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
tailscale.com/types/structs from tailscale.com/control/controlclient+
tailscale.com/types/tkatype from tailscale.com/tka+
tailscale.com/types/views from tailscale.com/ipn/ipnlocal+
tailscale.com/util/cibuild from tailscale.com/health
tailscale.com/util/clientmetric from tailscale.com/control/controlclient+
tailscale.com/util/cloudenv from tailscale.com/net/dns/resolver+
tailscale.com/util/cmpver from tailscale.com/net/dns+

@ -358,7 +358,7 @@ func run() (err error) {
sys.Set(netMon)
}
pol := logpolicy.New(logtail.CollectionNode, netMon, nil /* use log.Printf */)
pol := logpolicy.New(logtail.CollectionNode, netMon, sys.HealthTracker(), nil /* use log.Printf */)
pol.SetVerbosityLevel(args.verbose)
logPol = pol
defer func() {
@ -393,8 +393,8 @@ func run() (err error) {
// Always clean up, even if we're going to run the server. This covers cases
// such as when a system was rebooted without shutting down, or tailscaled
// crashed, and would for example restore system DNS configuration.
dns.CleanUp(logf, args.tunname)
router.CleanUp(logf, args.tunname)
dns.CleanUp(logf, netMon, args.tunname)
router.CleanUp(logf, netMon, args.tunname)
// If the cleanUp flag was passed, then exit.
if args.cleanUp {
return nil
@ -651,6 +651,7 @@ func tryEngine(logf logger.Logf, sys *tsd.System, name string) (onlyNetstack boo
conf := wgengine.Config{
ListenPort: args.port,
NetMon: sys.NetMon.Get(),
HealthTracker: sys.HealthTracker(),
Dialer: sys.Dialer.Get(),
SetSubsystem: sys.Set,
ControlKnobs: sys.ControlKnobs(),
@ -676,7 +677,7 @@ func tryEngine(logf logger.Logf, sys *tsd.System, name string) (onlyNetstack boo
// configuration being unavailable (from the noop
// manager). More in Issue 4017.
// TODO(bradfitz): add a Synology-specific DNS manager.
conf.DNS, err = dns.NewOSConfigurator(logf, "") // empty interface name
conf.DNS, err = dns.NewOSConfigurator(logf, sys.HealthTracker(), "") // empty interface name
if err != nil {
return false, fmt.Errorf("dns.NewOSConfigurator: %w", err)
}
@ -698,13 +699,13 @@ func tryEngine(logf logger.Logf, sys *tsd.System, name string) (onlyNetstack boo
return false, err
}
r, err := router.New(logf, dev, sys.NetMon.Get())
r, err := router.New(logf, dev, sys.NetMon.Get(), sys.HealthTracker())
if err != nil {
dev.Close()
return false, fmt.Errorf("creating router: %w", err)
}
d, err := dns.NewOSConfigurator(logf, devName)
d, err := dns.NewOSConfigurator(logf, sys.HealthTracker(), devName)
if err != nil {
dev.Close()
r.Close()

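The tailscaled hunks are all one move: the process-wide tsd.System owns a single health.Tracker, and logpolicy, DNS, the router, and the engine each receive it explicitly rather than touching package-level health globals. A sketch of the engine side under that model; field names come from the hunk above, and the Dialer is omitted to keep the sketch self-contained:

```go
package main

import (
	"fmt"

	"tailscale.com/net/netmon"
	"tailscale.com/tsd"
	"tailscale.com/wgengine"
)

func main() {
	sys := tsd.NewSystem()
	sys.Set(netmon.NewStatic()) // register the network monitor with the system

	// Mirrors tryEngine: the shared tracker rides in on the config.
	conf := wgengine.Config{
		ListenPort:    0, // pick any free port
		NetMon:        sys.NetMon.Get(),
		HealthTracker: sys.HealthTracker(),
	}
	fmt.Println("tracker wired:", conf.HealthTracker != nil)
}
```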
@ -104,9 +104,10 @@ func newIPN(jsConfig js.Value) map[string]any {
sys.Set(store)
dialer := &tsdial.Dialer{Logf: logf}
eng, err := wgengine.NewUserspaceEngine(logf, wgengine.Config{
Dialer: dialer,
SetSubsystem: sys.Set,
ControlKnobs: sys.ControlKnobs(),
Dialer: dialer,
SetSubsystem: sys.Set,
ControlKnobs: sys.ControlKnobs(),
HealthTracker: sys.HealthTracker(),
})
if err != nil {
log.Fatal(err)

@ -12,7 +12,6 @@ import (
"sync/atomic"
"time"
"tailscale.com/health"
"tailscale.com/logtail/backoff"
"tailscale.com/net/sockstats"
"tailscale.com/tailcfg"
@ -195,7 +194,7 @@ func NewNoStart(opts Options) (_ *Auto, err error) {
c.mapCtx, c.mapCancel = context.WithCancel(context.Background())
c.mapCtx = sockstats.WithSockStats(c.mapCtx, sockstats.LabelControlClientAuto, opts.Logf)
c.unregisterHealthWatch = health.RegisterWatcher(direct.ReportHealthChange)
c.unregisterHealthWatch = opts.HealthTracker.RegisterWatcher(direct.ReportHealthChange)
return c, nil
}
@ -316,7 +315,7 @@ func (c *Auto) authRoutine() {
}
if goal == nil {
health.SetAuthRoutineInError(nil)
c.direct.health.SetAuthRoutineInError(nil)
// Wait for user to Login or Logout.
<-ctx.Done()
c.logf("[v1] authRoutine: context done.")
@ -343,7 +342,7 @@ func (c *Auto) authRoutine() {
f = "TryLogin"
}
if err != nil {
health.SetAuthRoutineInError(err)
c.direct.health.SetAuthRoutineInError(err)
report(err, f)
bo.BackOff(ctx, err)
continue
@ -373,7 +372,7 @@ func (c *Auto) authRoutine() {
}
// success
health.SetAuthRoutineInError(nil)
c.direct.health.SetAuthRoutineInError(nil)
c.mu.Lock()
c.urlToVisit = ""
c.loggedIn = true
@ -503,11 +502,11 @@ func (c *Auto) mapRoutine() {
c.logf("[v1] mapRoutine: context done.")
continue
}
health.SetOutOfPollNetMap()
c.direct.health.SetOutOfPollNetMap()
err := c.direct.PollNetMap(ctx, mrs)
health.SetOutOfPollNetMap()
c.direct.health.SetOutOfPollNetMap()
c.mu.Lock()
c.inMapPoll = false
if c.state == StateSynchronized {

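The Auto client hunks show the same migration at the call sites: package-level calls like health.SetAuthRoutineInError become methods on the tracker the client was built with (c.direct.health). A sketch of driving such a tracker directly; the watcher callback signature is an assumption based on direct.ReportHealthChange being passed to RegisterWatcher above:

```go
package main

import (
	"log"

	"tailscale.com/health"
)

func main() {
	ht := new(health.Tracker) // per-client tracker, not a global
	unregister := ht.RegisterWatcher(func(sys health.Subsystem, err error) {
		log.Printf("health change for %q: %v", sys, err)
	})
	defer unregister()

	// The auth routine reports and clears its error the same way:
	ht.SetAuthRoutineInError(nil) // nil clears any prior warning
}
```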
@ -36,7 +36,6 @@ import (
"tailscale.com/logtail"
"tailscale.com/net/dnscache"
"tailscale.com/net/dnsfallback"
"tailscale.com/net/interfaces"
"tailscale.com/net/netmon"
"tailscale.com/net/netutil"
"tailscale.com/net/tlsdial"
@ -56,6 +55,7 @@ import (
"tailscale.com/util/singleflight"
"tailscale.com/util/syspolicy"
"tailscale.com/util/systemd"
"tailscale.com/util/testenv"
"tailscale.com/util/zstdframe"
)
@ -68,7 +68,8 @@ type Direct struct {
serverURL string // URL of the tailcontrol server
clock tstime.Clock
logf logger.Logf
netMon *netmon.Monitor // or nil
netMon *netmon.Monitor // non-nil
health *health.Tracker
discoPubKey key.DiscoPublic
getMachinePrivKey func() (key.MachinePrivate, error)
debugFlags []string
@ -119,10 +120,10 @@ type Options struct {
Hostinfo *tailcfg.Hostinfo // non-nil passes ownership, nil means to use default using os.Hostname, etc
DiscoPublicKey key.DiscoPublic
Logf logger.Logf
HTTPTestClient *http.Client // optional HTTP client to use (for tests only)
NoiseTestClient *http.Client // optional HTTP client to use for noise RPCs (tests only)
DebugFlags []string // debug settings to send to control
NetMon *netmon.Monitor // optional network monitor
HTTPTestClient *http.Client // optional HTTP client to use (for tests only)
NoiseTestClient *http.Client // optional HTTP client to use for noise RPCs (tests only)
DebugFlags []string // debug settings to send to control
HealthTracker *health.Tracker
PopBrowserURL func(url string) // optional func to open browser
OnClientVersion func(*tailcfg.ClientVersion) // optional func to inform GUI of client version status
OnControlTime func(time.Time) // optional func to notify callers of new time from control
@ -211,6 +212,19 @@ func NewDirect(opts Options) (*Direct, error) {
if opts.GetMachinePrivateKey == nil {
return nil, errors.New("controlclient.New: no GetMachinePrivateKey specified")
}
if opts.Dialer == nil {
if testenv.InTest() {
panic("no Dialer")
}
return nil, errors.New("controlclient.New: no Dialer specified")
}
netMon := opts.Dialer.NetMon()
if netMon == nil {
if testenv.InTest() {
panic("no NetMon in Dialer")
}
return nil, errors.New("controlclient.New: Dialer has nil NetMon")
}
if opts.ControlKnobs == nil {
opts.ControlKnobs = &controlknobs.Knobs{}
}
@ -231,9 +245,8 @@ func NewDirect(opts Options) (*Direct, error) {
dnsCache := &dnscache.Resolver{
Forward: dnscache.Get().Forward, // use default cache's forwarder
UseLastGood: true,
LookupIPFallback: dnsfallback.MakeLookupFunc(opts.Logf, opts.NetMon),
LookupIPFallback: dnsfallback.MakeLookupFunc(opts.Logf, netMon),
Logf: opts.Logf,
NetMon: opts.NetMon,
}
httpc := opts.HTTPTestClient
@ -248,7 +261,7 @@ func NewDirect(opts Options) (*Direct, error) {
tr := http.DefaultTransport.(*http.Transport).Clone()
tr.Proxy = tshttpproxy.ProxyFromEnvironment
tshttpproxy.SetTransportGetProxyConnectHeader(tr)
tr.TLSClientConfig = tlsdial.Config(serverURL.Hostname(), tr.TLSClientConfig)
tr.TLSClientConfig = tlsdial.Config(serverURL.Hostname(), opts.HealthTracker, tr.TLSClientConfig)
tr.DialContext = dnscache.Dialer(opts.Dialer.SystemDial, dnsCache)
tr.DialTLSContext = dnscache.TLSDialer(opts.Dialer.SystemDial, dnsCache, tr.TLSClientConfig)
tr.ForceAttemptHTTP2 = true
@ -270,7 +283,8 @@ func NewDirect(opts Options) (*Direct, error) {
authKey: opts.AuthKey,
discoPubKey: opts.DiscoPublicKey,
debugFlags: opts.DebugFlags,
netMon: opts.NetMon,
netMon: netMon,
health: opts.HealthTracker,
skipIPForwardingCheck: opts.SkipIPForwardingCheck,
pinger: opts.Pinger,
popBrowser: opts.PopBrowserURL,
@ -894,10 +908,10 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
ipForwardingBroken(hi.RoutableIPs, c.netMon.InterfaceState()) {
extraDebugFlags = append(extraDebugFlags, "warn-ip-forwarding-off")
}
if health.RouterHealth() != nil {
if c.health.RouterHealth() != nil {
extraDebugFlags = append(extraDebugFlags, "warn-router-unhealthy")
}
extraDebugFlags = health.AppendWarnableDebugFlags(extraDebugFlags)
extraDebugFlags = c.health.AppendWarnableDebugFlags(extraDebugFlags)
if hostinfo.DisabledEtcAptSource() {
extraDebugFlags = append(extraDebugFlags, "warn-etc-apt-source-disabled")
}
@ -970,7 +984,7 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
}
defer res.Body.Close()
health.NoteMapRequestHeard(request)
c.health.NoteMapRequestHeard(request)
watchdogTimer.Reset(watchdogTimeout)
if nu == nil {
@ -1041,7 +1055,7 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
metricMapResponseMessages.Add(1)
if isStreaming {
health.GotStreamedMapResponse()
c.health.GotStreamedMapResponse()
}
if pr := resp.PingRequest; pr != nil && c.isUniquePingRequest(pr) {
@ -1269,7 +1283,7 @@ var clock tstime.Clock = tstime.StdClock{}
//
// TODO(bradfitz): Change controlclient.Options.SkipIPForwardingCheck into a
// func([]netip.Prefix) error signature instead.
func ipForwardingBroken(routes []netip.Prefix, state *interfaces.State) bool {
func ipForwardingBroken(routes []netip.Prefix, state *netmon.State) bool {
warn, err := netutil.CheckIPForwarding(routes, state)
if err != nil {
// Oh well, we tried. This is just for debugging.
@ -1450,14 +1464,15 @@ func (c *Direct) getNoiseClient() (*NoiseClient, error) {
}
c.logf("[v1] creating new noise client")
nc, err := NewNoiseClient(NoiseOpts{
PrivKey: k,
ServerPubKey: serverNoiseKey,
ServerURL: c.serverURL,
Dialer: c.dialer,
DNSCache: c.dnsCache,
Logf: c.logf,
NetMon: c.netMon,
DialPlan: dp,
PrivKey: k,
ServerPubKey: serverNoiseKey,
ServerURL: c.serverURL,
Dialer: c.dialer,
DNSCache: c.dnsCache,
Logf: c.logf,
NetMon: c.netMon,
HealthTracker: c.health,
DialPlan: dp,
})
if err != nil {
return nil, err

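NewDirect now rejects a nil Dialer and derives its netmon.Monitor from it, which is why the tests below swap new(tsdial.Dialer) for tsdial.NewDialer(netmon.NewStatic()). A sketch of the minimal Options a caller now supplies; the control URL is hypothetical:

```go
package main

import (
	"log"

	"tailscale.com/control/controlclient"
	"tailscale.com/health"
	"tailscale.com/net/netmon"
	"tailscale.com/net/tsdial"
	"tailscale.com/types/key"
)

func main() {
	k := key.NewMachine()
	opts := controlclient.Options{
		ServerURL: "https://controlplane.example.com", // hypothetical
		GetMachinePrivateKey: func() (key.MachinePrivate, error) {
			return k, nil
		},
		// The Dialer carries the netmon.Monitor; NewDirect panics in
		// tests (and errors otherwise) if either is missing.
		Dialer:        tsdial.NewDialer(netmon.NewStatic()),
		HealthTracker: new(health.Tracker),
	}
	if _, err := controlclient.NewDirect(opts); err != nil {
		log.Fatal(err)
	}
}
```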
@ -14,6 +14,7 @@ import (
"tailscale.com/hostinfo"
"tailscale.com/ipn/ipnstate"
"tailscale.com/net/netmon"
"tailscale.com/net/tsdial"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
@ -31,7 +32,7 @@ func TestNewDirect(t *testing.T) {
GetMachinePrivateKey: func() (key.MachinePrivate, error) {
return k, nil
},
Dialer: new(tsdial.Dialer),
Dialer: tsdial.NewDialer(netmon.NewStatic()),
}
c, err := NewDirect(opts)
if err != nil {
@ -107,7 +108,7 @@ func TestTsmpPing(t *testing.T) {
GetMachinePrivateKey: func() (key.MachinePrivate, error) {
return k, nil
},
Dialer: new(tsdial.Dialer),
Dialer: tsdial.NewDialer(netmon.NewStatic()),
}
c, err := NewDirect(opts)

@ -19,6 +19,7 @@ import (
"golang.org/x/net/http2"
"tailscale.com/control/controlbase"
"tailscale.com/control/controlhttp"
"tailscale.com/health"
"tailscale.com/net/dnscache"
"tailscale.com/net/netmon"
"tailscale.com/net/tsdial"
@ -174,6 +175,7 @@ type NoiseClient struct {
logf logger.Logf
netMon *netmon.Monitor
health *health.Tracker
// mu only protects the following variables.
mu sync.Mutex
@ -204,6 +206,8 @@ type NoiseOpts struct {
// network interface state. This field can be nil; if so, the current
// state will be looked up dynamically.
NetMon *netmon.Monitor
// HealthTracker, if non-nil, is the health tracker to use.
HealthTracker *health.Tracker
// DialPlan, if set, is a function that should return an explicit plan
// on how to connect to the server.
DialPlan func() *tailcfg.ControlDialPlan
@ -247,6 +251,7 @@ func NewNoiseClient(opts NoiseOpts) (*NoiseClient, error) {
dialPlan: opts.DialPlan,
logf: opts.Logf,
netMon: opts.NetMon,
health: opts.HealthTracker,
}
// Create the HTTP/2 Transport using a net/http.Transport
@ -453,6 +458,7 @@ func (nc *NoiseClient) dial(ctx context.Context) (*noiseConn, error) {
DialPlan: dialPlan,
Logf: nc.logf,
NetMon: nc.netMon,
HealthTracker: nc.health,
Clock: tstime.StdClock{},
}).Dial(ctx)
if err != nil {

@ -16,6 +16,7 @@ import (
"golang.org/x/net/http2"
"tailscale.com/control/controlhttp"
"tailscale.com/net/netmon"
"tailscale.com/net/tsdial"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
@ -73,7 +74,7 @@ func (tt noiseClientTest) run(t *testing.T) {
})
defer hs.Close()
dialer := new(tsdial.Dialer)
dialer := tsdial.NewDialer(netmon.NewStatic())
nc, err := NewNoiseClient(NoiseOpts{
PrivKey: clientPrivate,
ServerPubKey: serverPrivate.Public(),

@ -393,7 +393,6 @@ func (a *Dialer) resolver() *dnscache.Resolver {
LookupIPFallback: dnsfallback.MakeLookupFunc(a.logf, a.NetMon),
UseLastGood: true,
Logf: a.Logf, // not a.logf method; we want to propagate nil-ness
NetMon: a.NetMon,
}
}
@ -412,7 +411,6 @@ func (a *Dialer) tryURLUpgrade(ctx context.Context, u *url.URL, addr netip.Addr,
SingleHostStaticResult: []netip.Addr{addr},
SingleHost: u.Hostname(),
Logf: a.Logf, // not a.logf method; we want to propagate nil-ness
NetMon: a.NetMon,
}
} else {
dns = a.resolver()
@ -433,7 +431,7 @@ func (a *Dialer) tryURLUpgrade(ctx context.Context, u *url.URL, addr netip.Addr,
// Disable HTTP2, since h2 can't do protocol switching.
tr.TLSClientConfig.NextProtos = []string{}
tr.TLSNextProto = map[string]func(string, *tls.Conn) http.RoundTripper{}
tr.TLSClientConfig = tlsdial.Config(a.Hostname, tr.TLSClientConfig)
tr.TLSClientConfig = tlsdial.Config(a.Hostname, a.HealthTracker, tr.TLSClientConfig)
if !tr.TLSClientConfig.InsecureSkipVerify {
panic("unexpected") // should be set by tlsdial.Config
}

@ -8,6 +8,7 @@ import (
"net/url"
"time"
"tailscale.com/health"
"tailscale.com/net/dnscache"
"tailscale.com/net/netmon"
"tailscale.com/tailcfg"
@ -79,6 +80,9 @@ type Dialer struct {
NetMon *netmon.Monitor
// HealthTracker, if non-nil, is the health tracker to use.
HealthTracker *health.Tracker
// DialPlan, if set, contains instructions from the control server on
// how to connect to it. If present, we will try the methods in this
// plan before falling back to DNS.

@ -22,6 +22,7 @@ import (
"tailscale.com/control/controlbase"
"tailscale.com/net/dnscache"
"tailscale.com/net/netmon"
"tailscale.com/net/socks5"
"tailscale.com/net/tsdial"
"tailscale.com/tailcfg"
@ -199,14 +200,17 @@ func testControlHTTP(t *testing.T, param httpTestParam) {
defer cancel()
}
netMon := netmon.NewStatic()
dialer := tsdial.NewDialer(netMon)
a := &Dialer{
Hostname: "localhost",
HTTPPort: strconv.Itoa(httpLn.Addr().(*net.TCPAddr).Port),
HTTPSPort: strconv.Itoa(httpsLn.Addr().(*net.TCPAddr).Port),
MachineKey: client,
ControlKey: server.Public(),
NetMon: netMon,
ProtocolVersion: testProtocolVersion,
Dialer: new(tsdial.Dialer).SystemDial,
Dialer: dialer.SystemDial,
Logf: t.Logf,
omitCertErrorLogging: true,
testFallbackDelay: fallbackDelay,
@ -643,7 +647,7 @@ func TestDialPlan(t *testing.T) {
dialer := closeTrackDialer{
t: t,
inner: new(tsdial.Dialer).SystemDial,
inner: tsdial.NewDialer(netmon.NewStatic()).SystemDial,
conns: make(map[*closeTrackConn]bool),
}
defer dialer.Done()

@ -72,6 +72,10 @@ type Knobs struct {
// ProbeUDPLifetime is whether the node should probe UDP path lifetime on
// the tail end of an active direct connection in magicsock.
ProbeUDPLifetime atomic.Bool
// AppCStoreRoutes is whether the node should store RouteInfo to StateStore
// if it's an app connector.
AppCStoreRoutes atomic.Bool
}
// UpdateFromNodeAttributes updates k (if non-nil) based on the provided self
@ -96,6 +100,7 @@ func (k *Knobs) UpdateFromNodeAttributes(capMap tailcfg.NodeCapMap) {
forceNfTables = has(tailcfg.NodeAttrLinuxMustUseNfTables)
seamlessKeyRenewal = has(tailcfg.NodeAttrSeamlessKeyRenewal)
probeUDPLifetime = has(tailcfg.NodeAttrProbeUDPLifetime)
appCStoreRoutes = has(tailcfg.NodeAttrStoreAppCRoutes)
)
if has(tailcfg.NodeAttrOneCGNATEnable) {
@ -118,6 +123,7 @@ func (k *Knobs) UpdateFromNodeAttributes(capMap tailcfg.NodeCapMap) {
k.LinuxForceNfTables.Store(forceNfTables)
k.SeamlessKeyRenewal.Store(seamlessKeyRenewal)
k.ProbeUDPLifetime.Store(probeUDPLifetime)
k.AppCStoreRoutes.Store(appCStoreRoutes)
}
// AsDebugJSON returns k as something that can be marshalled with json.Marshal
@ -141,5 +147,6 @@ func (k *Knobs) AsDebugJSON() map[string]any {
"LinuxForceNfTables": k.LinuxForceNfTables.Load(),
"SeamlessKeyRenewal": k.SeamlessKeyRenewal.Load(),
"ProbeUDPLifetime": k.ProbeUDPLifetime.Load(),
"AppCStoreRoutes": k.AppCStoreRoutes.Load(),
}
}

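The knob is toggled by a node attribute from control. A minimal sketch of the flow (the tailcfg.NodeCapMap literal's shape is an assumption):

var k controlknobs.Knobs
k.UpdateFromNodeAttributes(tailcfg.NodeCapMap{
	tailcfg.NodeAttrStoreAppCRoutes: nil, // presence of the key is what matters
})
if k.AppCStoreRoutes.Load() {
	// the app connector should persist discovered routes to the StateStore
}
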
@ -31,6 +31,7 @@ import (
"go4.org/mem"
"tailscale.com/derp"
"tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/net/dnscache"
"tailscale.com/net/netmon"
"tailscale.com/net/netns"
@ -51,10 +52,11 @@ import (
// Send/Recv will completely re-establish the connection (unless Close
// has been called).
type Client struct {
TLSConfig *tls.Config // optional; nil means default
DNSCache *dnscache.Resolver // optional; nil means no caching
MeshKey string // optional; for trusted clients
IsProber bool // optional; for probers to optionally declare themselves as such
TLSConfig *tls.Config // optional; nil means default
HealthTracker *health.Tracker // optional; used if non-nil only
DNSCache *dnscache.Resolver // optional; nil means no caching
MeshKey string // optional; for trusted clients
IsProber bool // optional; for probers to optionally declare themselves as such
// WatchConnectionChanges is whether the client wishes to subscribe to
// notifications about clients connecting & disconnecting.
@ -69,7 +71,7 @@ type Client struct {
privateKey key.NodePrivate
logf logger.Logf
netMon *netmon.Monitor // optional; nil means interfaces will be looked up on-demand
netMon *netmon.Monitor // always non-nil
dialer func(ctx context.Context, network, addr string) (net.Conn, error)
// Either url or getRegion is non-nil:
@ -114,8 +116,11 @@ func (c *Client) String() string {
// NewRegionClient returns a new DERP-over-HTTP client. It connects lazily.
// To trigger a connection, use Connect.
// The netMon parameter must be non-nil; it's used to do faster interface lookups.
// The healthTracker parameter is optional.
func NewRegionClient(privateKey key.NodePrivate, logf logger.Logf, netMon *netmon.Monitor, getRegion func() *tailcfg.DERPRegion) *Client {
if netMon == nil {
panic("nil netMon")
}
ctx, cancel := context.WithCancel(context.Background())
c := &Client{
privateKey: privateKey,
@ -137,7 +142,10 @@ func NewNetcheckClient(logf logger.Logf) *Client {
// NewClient returns a new DERP-over-HTTP client. It connects lazily.
// To trigger a connection, use Connect.
func NewClient(privateKey key.NodePrivate, serverURL string, logf logger.Logf) (*Client, error) {
func NewClient(privateKey key.NodePrivate, serverURL string, logf logger.Logf, netMon *netmon.Monitor) (*Client, error) {
if netMon == nil {
panic("nil netMon")
}
u, err := url.Parse(serverURL)
if err != nil {
return nil, fmt.Errorf("derphttp.NewClient: %v", err)
@ -154,6 +162,7 @@ func NewClient(privateKey key.NodePrivate, serverURL string, logf logger.Logf) (
ctx: ctx,
cancelCtx: cancel,
clock: tstime.StdClock{},
netMon: netMon,
}
return c, nil
}
@ -612,7 +621,7 @@ func (c *Client) dialRegion(ctx context.Context, reg *tailcfg.DERPRegion) (net.C
}
func (c *Client) tlsClient(nc net.Conn, node *tailcfg.DERPNode) *tls.Conn {
tlsConf := tlsdial.Config(c.tlsServerName(node), c.TLSConfig)
tlsConf := tlsdial.Config(c.tlsServerName(node), c.HealthTracker, c.TLSConfig)
if node != nil {
if node.InsecureForTests {
tlsConf.InsecureSkipVerify = true

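derphttp.NewClient now requires a non-nil netmon.Monitor, and the optional HealthTracker is set as a field after construction. A hedged sketch from a caller's perspective (the URL and tracker are placeholders):

priv := key.NewNode()
c, err := derphttp.NewClient(priv, "https://derp.example.com/derp", log.Printf, netmon.NewStatic())
if err != nil {
	log.Fatal(err)
}
c.HealthTracker = tracker // optional *health.Tracker; used if non-nil
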
@ -16,12 +16,15 @@ import (
"time"
"tailscale.com/derp"
"tailscale.com/net/netmon"
"tailscale.com/types/key"
)
func TestSendRecv(t *testing.T) {
serverPrivateKey := key.NewNode()
netMon := netmon.NewStatic()
const numClients = 3
var clientPrivateKeys []key.NodePrivate
var clientKeys []key.NodePublic
@ -68,7 +71,7 @@ func TestSendRecv(t *testing.T) {
}()
for i := range numClients {
key := clientPrivateKeys[i]
c, err := NewClient(key, serverURL, t.Logf)
c, err := NewClient(key, serverURL, t.Logf, netMon)
if err != nil {
t.Fatalf("client %d: %v", i, err)
}
@ -183,7 +186,7 @@ func TestPing(t *testing.T) {
}
}()
c, err := NewClient(key.NewNode(), serverURL, t.Logf)
c, err := NewClient(key.NewNode(), serverURL, t.Logf, netmon.NewStatic())
if err != nil {
t.Fatalf("NewClient: %v", err)
}
@ -236,7 +239,7 @@ func newTestServer(t *testing.T, k key.NodePrivate) (serverURL string, s *derp.S
}
func newWatcherClient(t *testing.T, watcherPrivateKey key.NodePrivate, serverToWatchURL string) (c *Client) {
c, err := NewClient(watcherPrivateKey, serverToWatchURL, t.Logf)
c, err := NewClient(watcherPrivateKey, serverToWatchURL, t.Logf, netmon.NewStatic())
if err != nil {
t.Fatal(err)
}

@ -8,7 +8,7 @@ import (
"sort"
"github.com/safchain/ethtool"
"tailscale.com/net/interfaces"
"tailscale.com/net/netmon"
"tailscale.com/types/logger"
"tailscale.com/util/set"
)
@ -21,7 +21,7 @@ func ethtoolImpl(logf logger.Logf) error {
}
defer et.Close()
interfaces.ForeachInterface(func(iface interfaces.Interface, _ []netip.Prefix) {
netmon.ForeachInterface(func(iface netmon.Interface, _ []netip.Prefix) {
ilogf := logger.WithPrefix(logf, iface.Name+": ")
features, err := et.Features(iface.Name)
if err == nil {

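The interfaces helpers referenced here now live in netmon. A sketch of the replacement iteration (assuming netmon.ForeachInterface keeps the error return of its interfaces predecessor):

err := netmon.ForeachInterface(func(iface netmon.Interface, pfxs []netip.Prefix) {
	log.Printf("%s: %v", iface.Name, pfxs)
})
if err != nil {
	log.Printf("enumerating interfaces: %v", err)
}
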
@ -39,7 +39,7 @@ func ParsePermissions(rawGrants [][]byte) (Permissions, error) {
var g grant
err := json.Unmarshal(rawGrant, &g)
if err != nil {
return nil, fmt.Errorf("unmarshal raw grants: %v", err)
return nil, fmt.Errorf("unmarshal raw grants %s: %v", rawGrant, err)
}
for _, share := range g.Shares {
existingPermission := permissions[share]

@ -120,4 +120,4 @@
in
flake-utils.lib.eachDefaultSystem (system: flakeForSystem nixpkgs system);
}
# nix-direnv cache busting line: sha256-KLJibt2Yv4WMxcmKHs+3e0sjVWFc9Fg0ak4sLoAocA0=
# nix-direnv cache busting line: sha256-UTFpy0v5HBLQ88kr3JVL2loRWrVfE1y2p40fFhFHE/I=

@ -16,7 +16,6 @@ require (
github.com/aws/aws-sdk-go-v2/service/ssm v1.44.7
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf
github.com/coreos/go-systemd/v22 v22.5.0
github.com/creack/pty v1.1.21
github.com/dave/courtney v0.4.0
github.com/dave/jennifer v1.7.0

@ -1 +1 @@
sha256-KLJibt2Yv4WMxcmKHs+3e0sjVWFc9Fg0ak4sLoAocA0=
sha256-UTFpy0v5HBLQ88kr3JVL2loRWrVfE1y2p40fFhFHE/I=

@ -217,8 +217,6 @@ github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 h1:8h5+bWd7R6
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf h1:iW4rZ826su+pqaw19uhpSCzhj44qo35pNgKFGqzDKkU=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.21 h1:1/QdRyBaHHJP61QkWMXlOIBfsgdDeeKfK8SYVUWJKf0=
@ -362,7 +360,6 @@ github.com/gobuffalo/flect v1.0.2 h1:eqjPGSo2WmjgY2XlpGwo2NXgL3RucAKo4k4qQMNA5sA
github.com/gobuffalo/flect v1.0.2/go.mod h1:A5msMlrHtLqh9umBSnvabjsMrCcCpAyzglnDvkbYKHs=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 h1:sQspH8M4niEijh3PFscJRLDnkL547IeP7kpPe3uUhEg=
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466/go.mod h1:ZiQxhyQ+bbbfxUKVvjfO498oPYvtYhZzycal3G/NHmU=
github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=

@ -9,6 +9,7 @@ import (
"errors"
"fmt"
"net/http"
"os"
"runtime"
"sort"
"sync"
@ -17,20 +18,59 @@ import (
"tailscale.com/envknob"
"tailscale.com/tailcfg"
"tailscale.com/types/opt"
"tailscale.com/util/cibuild"
"tailscale.com/util/mak"
"tailscale.com/util/multierr"
"tailscale.com/util/set"
)
var (
// mu guards everything in this var block.
mu sync.Mutex
debugHandler map[string]http.Handler
)
// ReceiveFunc is one of the three magicsock Receive funcs (IPv4, IPv6, or
// DERP).
type ReceiveFunc int
// ReceiveFunc indices for Tracker.MagicSockReceiveFuncs.
const (
ReceiveIPv4 ReceiveFunc = 0
ReceiveIPv6 ReceiveFunc = 1
ReceiveDERP ReceiveFunc = 2
)
func (f ReceiveFunc) String() string {
if f < 0 || int(f) >= len(receiveNames) {
return fmt.Sprintf("ReceiveFunc(%d)", f)
}
return receiveNames[f]
}
var receiveNames = []string{
ReceiveIPv4: "ReceiveIPv4",
ReceiveIPv6: "ReceiveIPv6",
ReceiveDERP: "ReceiveDERP",
}
// Tracker tracks the health of various Tailscale subsystems,
// comparing each subsystem's state with the others to make sure
// they're consistent based on the user's intended state.
type Tracker struct {
// MagicSockReceiveFuncs tracks the state of the three
// magicsock receive functions: IPv4, IPv6, and DERP.
MagicSockReceiveFuncs [3]ReceiveFuncStats // indexed by ReceiveFunc values
// mu guards everything that follows.
mu sync.Mutex
sysErr = map[Subsystem]error{} // error key => err (or nil for no error)
watchers = set.HandleSet[func(Subsystem, error)]{} // opt func to run if error state changes
warnables = set.Set[*Warnable]{}
timer *time.Timer
warnables []*Warnable // keys ever set
warnableVal map[*Warnable]error
debugHandler = map[string]http.Handler{}
sysErr map[Subsystem]error // subsystem => err (or nil for no error)
watchers set.HandleSet[func(Subsystem, error)] // opt func to run if error state changes
timer *time.Timer
inMapPoll bool
inMapPollSince time.Time
@ -38,19 +78,19 @@ var (
lastStreamedMapResponse time.Time
derpHomeRegion int
derpHomeless bool
derpRegionConnected = map[int]bool{}
derpRegionHealthProblem = map[int]string{}
derpRegionLastFrame = map[int]time.Time{}
derpRegionConnected map[int]bool
derpRegionHealthProblem map[int]string
derpRegionLastFrame map[int]time.Time
lastMapRequestHeard time.Time // time we got a 200 from control for a MapRequest
ipnState string
ipnWantRunning bool
anyInterfaceUp = true // until told otherwise
anyInterfaceUp opt.Bool // empty means unknown (assume true)
udp4Unbound bool
controlHealth []string
lastLoginErr error
localLogConfigErr error
tlsConnectionErrors = map[string]error{} // map[ServerName]error
)
tlsConnectionErrors map[string]error // map[ServerName]error
}
// Subsystem is the name of a subsystem whose health can be monitored.
type Subsystem string
@ -76,16 +116,16 @@ const (
SysTKA = Subsystem("tailnet-lock")
)
// NewWarnable returns a new warnable item that the caller can mark
// as healthy or in warning state.
// NewWarnable returns a new warnable item that the caller can mark as healthy or
// in warning state via Tracker.SetWarnable.
//
// NewWarnable is generally called in init and stored in a package global. It
// can be used by multiple Trackers.
func NewWarnable(opts ...WarnableOpt) *Warnable {
w := new(Warnable)
for _, o := range opts {
o.mod(w)
}
mu.Lock()
defer mu.Unlock()
warnables.Add(w)
return w
}
@ -118,49 +158,66 @@ type warnOptFunc func(*Warnable)
func (f warnOptFunc) mod(w *Warnable) { f(w) }
// Warnable is a health check item that may or may not be in a bad warning state.
// The caller of NewWarnable is responsible for calling Set to update the state.
// The caller of NewWarnable is responsible for calling Tracker.SetWarnable to update the state.
type Warnable struct {
debugFlag string // optional MapRequest.DebugFlag to send when unhealthy
// hasConnectivityImpact, if true, indicates that this warning relates to the
// configuration of the networking stack on the machine and impacts connectivity.
hasConnectivityImpact bool
}
isSet atomic.Bool
mu sync.Mutex
err error
// nil reports whether t is nil.
// It exists to accept nil *Tracker receivers on all methods
// to at least not crash. But because a nil receiver indicates
// some lost Tracker plumbing, we want to capture stack trace
// samples when it occurs.
func (t *Tracker) nil() bool {
if t != nil {
return false
}
if cibuild.On() {
stack := make([]byte, 1<<10)
stack = stack[:runtime.Stack(stack, false)]
fmt.Fprintf(os.Stderr, "## WARNING: (non-fatal) nil health.Tracker (being strict in CI):\n%s\n", stack)
}
// TODO(bradfitz): open source our "unexpected" package
// and use it here to capture samples of stacks where
// t is nil.
return true
}
// Set updates the Warnable's state.
// If non-nil, it's considered unhealthy.
func (w *Warnable) Set(err error) {
w.mu.Lock()
defer w.mu.Unlock()
w.err = err
w.isSet.Store(err != nil)
}
func (w *Warnable) get() error {
if !w.isSet.Load() {
return nil
func (t *Tracker) SetWarnable(w *Warnable, err error) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
l0 := len(t.warnableVal)
mak.Set(&t.warnableVal, w, err)
if len(t.warnableVal) != l0 {
t.warnables = append(t.warnables, w)
}
w.mu.Lock()
defer w.mu.Unlock()
return w.err
}
// AppendWarnableDebugFlags appends to base any health items that are currently in failed
// state and were created with MapDebugFlag.
func AppendWarnableDebugFlags(base []string) []string {
func (t *Tracker) AppendWarnableDebugFlags(base []string) []string {
if t.nil() {
return base
}
ret := base
mu.Lock()
defer mu.Unlock()
for w := range warnables {
t.mu.Lock()
defer t.mu.Unlock()
for w, err := range t.warnableVal {
if w.debugFlag == "" {
continue
}
if err := w.get(); err != nil {
if err != nil {
ret = append(ret, w.debugFlag)
}
}
@ -172,75 +229,87 @@ func AppendWarnableDebugFlags(base []string) []string {
// error changes state either to unhealthy or from unhealthy. It is
// not called on transition from unknown to healthy. It must be non-nil
// and is run in its own goroutine. The returned func unregisters it.
func RegisterWatcher(cb func(key Subsystem, err error)) (unregister func()) {
mu.Lock()
defer mu.Unlock()
handle := watchers.Add(cb)
if timer == nil {
timer = time.AfterFunc(time.Minute, timerSelfCheck)
func (t *Tracker) RegisterWatcher(cb func(key Subsystem, err error)) (unregister func()) {
if t.nil() {
return func() {}
}
t.mu.Lock()
defer t.mu.Unlock()
if t.watchers == nil {
t.watchers = set.HandleSet[func(Subsystem, error)]{}
}
handle := t.watchers.Add(cb)
if t.timer == nil {
t.timer = time.AfterFunc(time.Minute, t.timerSelfCheck)
}
return func() {
mu.Lock()
defer mu.Unlock()
delete(watchers, handle)
if len(watchers) == 0 && timer != nil {
timer.Stop()
timer = nil
t.mu.Lock()
defer t.mu.Unlock()
delete(t.watchers, handle)
if len(t.watchers) == 0 && t.timer != nil {
t.timer.Stop()
t.timer = nil
}
}
}
// SetRouterHealth sets the state of the wgengine/router.Router.
func SetRouterHealth(err error) { setErr(SysRouter, err) }
func (t *Tracker) SetRouterHealth(err error) { t.setErr(SysRouter, err) }
// RouterHealth returns the wgengine/router.Router error state.
func RouterHealth() error { return get(SysRouter) }
func (t *Tracker) RouterHealth() error { return t.get(SysRouter) }
// SetDNSHealth sets the state of the net/dns.Manager
func SetDNSHealth(err error) { setErr(SysDNS, err) }
func (t *Tracker) SetDNSHealth(err error) { t.setErr(SysDNS, err) }
// DNSHealth returns the net/dns.Manager error state.
func DNSHealth() error { return get(SysDNS) }
func (t *Tracker) DNSHealth() error { return t.get(SysDNS) }
// SetDNSOSHealth sets the state of the net/dns.OSConfigurator
func SetDNSOSHealth(err error) { setErr(SysDNSOS, err) }
func (t *Tracker) SetDNSOSHealth(err error) { t.setErr(SysDNSOS, err) }
// SetDNSManagerHealth sets the state of the Linux net/dns manager's
// discovery of the /etc/resolv.conf situation.
func SetDNSManagerHealth(err error) { setErr(SysDNSManager, err) }
func (t *Tracker) SetDNSManagerHealth(err error) { t.setErr(SysDNSManager, err) }
// DNSOSHealth returns the net/dns.OSConfigurator error state.
func DNSOSHealth() error { return get(SysDNSOS) }
func (t *Tracker) DNSOSHealth() error { return t.get(SysDNSOS) }
// SetTKAHealth sets the health of the tailnet key authority.
func SetTKAHealth(err error) { setErr(SysTKA, err) }
func (t *Tracker) SetTKAHealth(err error) { t.setErr(SysTKA, err) }
// TKAHealth returns the tailnet key authority error state.
func TKAHealth() error { return get(SysTKA) }
func (t *Tracker) TKAHealth() error { return t.get(SysTKA) }
// SetLocalLogConfigHealth sets the error state of this client's local log configuration.
func SetLocalLogConfigHealth(err error) {
mu.Lock()
defer mu.Unlock()
localLogConfigErr = err
func (t *Tracker) SetLocalLogConfigHealth(err error) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.localLogConfigErr = err
}
// SetTLSConnectionError sets the error state for connections to a specific
// host. Setting the error to nil will clear any previously-set error.
func SetTLSConnectionError(host string, err error) {
mu.Lock()
defer mu.Unlock()
func (t *Tracker) SetTLSConnectionError(host string, err error) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
if err == nil {
delete(tlsConnectionErrors, host)
delete(t.tlsConnectionErrors, host)
} else {
tlsConnectionErrors[host] = err
mak.Set(&t.tlsConnectionErrors, host, err)
}
}
func RegisterDebugHandler(typ string, h http.Handler) {
mu.Lock()
defer mu.Unlock()
debugHandler[typ] = h
mak.Set(&debugHandler, typ, h)
}
func DebugHandler(typ string) http.Handler {
@ -249,24 +318,33 @@ func DebugHandler(typ string) http.Handler {
return debugHandler[typ]
}
func get(key Subsystem) error {
mu.Lock()
defer mu.Unlock()
return sysErr[key]
func (t *Tracker) get(key Subsystem) error {
if t.nil() {
return nil
}
t.mu.Lock()
defer t.mu.Unlock()
return t.sysErr[key]
}
func setErr(key Subsystem, err error) {
mu.Lock()
defer mu.Unlock()
setLocked(key, err)
func (t *Tracker) setErr(key Subsystem, err error) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.setLocked(key, err)
}
func setLocked(key Subsystem, err error) {
old, ok := sysErr[key]
func (t *Tracker) setLocked(key Subsystem, err error) {
if t.sysErr == nil {
t.sysErr = map[Subsystem]error{}
}
old, ok := t.sysErr[key]
if !ok && err == nil {
// Initial happy path.
sysErr[key] = nil
selfCheckLocked()
t.sysErr[key] = nil
t.selfCheckLocked()
return
}
if ok && (old == nil) == (err == nil) {
@ -274,22 +352,25 @@ func setLocked(key Subsystem, err error) {
// don't run callbacks, but exact error might've
// changed, so note it.
if err != nil {
sysErr[key] = err
t.sysErr[key] = err
}
return
}
sysErr[key] = err
selfCheckLocked()
for _, cb := range watchers {
t.sysErr[key] = err
t.selfCheckLocked()
for _, cb := range t.watchers {
go cb(key, err)
}
}
func SetControlHealth(problems []string) {
mu.Lock()
defer mu.Unlock()
controlHealth = problems
selfCheckLocked()
func (t *Tracker) SetControlHealth(problems []string) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.controlHealth = problems
t.selfCheckLocked()
}
// GotStreamedMapResponse notes that we got a tailcfg.MapResponse
@ -297,177 +378,224 @@ func SetControlHealth(problems []string) {
//
// This also notes that a map poll is in progress. To unset that, call
// SetOutOfPollNetMap().
func GotStreamedMapResponse() {
mu.Lock()
defer mu.Unlock()
lastStreamedMapResponse = time.Now()
if !inMapPoll {
inMapPoll = true
inMapPollSince = time.Now()
func (t *Tracker) GotStreamedMapResponse() {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.lastStreamedMapResponse = time.Now()
if !t.inMapPoll {
t.inMapPoll = true
t.inMapPollSince = time.Now()
}
selfCheckLocked()
t.selfCheckLocked()
}
// SetOutOfPollNetMap records that the client is no longer in
// an HTTP map request long poll to the control plane.
func SetOutOfPollNetMap() {
mu.Lock()
defer mu.Unlock()
if !inMapPoll {
func (t *Tracker) SetOutOfPollNetMap() {
if t.nil() {
return
}
inMapPoll = false
lastMapPollEndedAt = time.Now()
selfCheckLocked()
t.mu.Lock()
defer t.mu.Unlock()
if !t.inMapPoll {
return
}
t.inMapPoll = false
t.lastMapPollEndedAt = time.Now()
t.selfCheckLocked()
}
// GetInPollNetMap reports whether the client has an open
// HTTP long poll open to the control plane.
func GetInPollNetMap() bool {
mu.Lock()
defer mu.Unlock()
return inMapPoll
func (t *Tracker) GetInPollNetMap() bool {
if t.nil() {
return false
}
t.mu.Lock()
defer t.mu.Unlock()
return t.inMapPoll
}
// SetMagicSockDERPHome notes what magicsock's view of its home DERP is.
//
// The homeless parameter is whether magicsock is running in DERP-disconnected
// mode, without discovering and maintaining a connection to its home DERP.
func SetMagicSockDERPHome(region int, homeless bool) {
mu.Lock()
defer mu.Unlock()
derpHomeRegion = region
derpHomeless = homeless
selfCheckLocked()
func (t *Tracker) SetMagicSockDERPHome(region int, homeless bool) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.derpHomeRegion = region
t.derpHomeless = homeless
t.selfCheckLocked()
}
// NoteMapRequestHeard notes whenever we successfully sent a map request
// to control for which we received a 200 response.
func NoteMapRequestHeard(mr *tailcfg.MapRequest) {
mu.Lock()
defer mu.Unlock()
func (t *Tracker) NoteMapRequestHeard(mr *tailcfg.MapRequest) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
// TODO: extract mr.HostInfo.NetInfo.PreferredDERP, compare
// against SetMagicSockDERPHome and
// SetDERPRegionConnectedState
lastMapRequestHeard = time.Now()
selfCheckLocked()
t.lastMapRequestHeard = time.Now()
t.selfCheckLocked()
}
func SetDERPRegionConnectedState(region int, connected bool) {
mu.Lock()
defer mu.Unlock()
derpRegionConnected[region] = connected
selfCheckLocked()
func (t *Tracker) SetDERPRegionConnectedState(region int, connected bool) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
mak.Set(&t.derpRegionConnected, region, connected)
t.selfCheckLocked()
}
// SetDERPRegionHealth sets or clears any problem associated with the
// provided DERP region.
func SetDERPRegionHealth(region int, problem string) {
mu.Lock()
defer mu.Unlock()
func (t *Tracker) SetDERPRegionHealth(region int, problem string) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
if problem == "" {
delete(derpRegionHealthProblem, region)
delete(t.derpRegionHealthProblem, region)
} else {
derpRegionHealthProblem[region] = problem
mak.Set(&t.derpRegionHealthProblem, region, problem)
}
selfCheckLocked()
t.selfCheckLocked()
}
// NoteDERPRegionReceivedFrame is called to note that a frame was received from
// the given DERP region at the current time.
func NoteDERPRegionReceivedFrame(region int) {
mu.Lock()
defer mu.Unlock()
derpRegionLastFrame[region] = time.Now()
selfCheckLocked()
func (t *Tracker) NoteDERPRegionReceivedFrame(region int) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
mak.Set(&t.derpRegionLastFrame, region, time.Now())
t.selfCheckLocked()
}
// GetDERPRegionReceivedTime returns the last time that a frame was received
// from the given DERP region, or the zero time if no communication with that
// region has occurred.
func GetDERPRegionReceivedTime(region int) time.Time {
mu.Lock()
defer mu.Unlock()
return derpRegionLastFrame[region]
func (t *Tracker) GetDERPRegionReceivedTime(region int) time.Time {
if t.nil() {
return time.Time{}
}
t.mu.Lock()
defer t.mu.Unlock()
return t.derpRegionLastFrame[region]
}
// state is an ipn.State.String() value: "Running", "Stopped", "NeedsLogin", etc.
func SetIPNState(state string, wantRunning bool) {
mu.Lock()
defer mu.Unlock()
ipnState = state
ipnWantRunning = wantRunning
selfCheckLocked()
func (t *Tracker) SetIPNState(state string, wantRunning bool) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.ipnState = state
t.ipnWantRunning = wantRunning
t.selfCheckLocked()
}
// SetAnyInterfaceUp sets whether any network interface is up.
func SetAnyInterfaceUp(up bool) {
mu.Lock()
defer mu.Unlock()
anyInterfaceUp = up
selfCheckLocked()
func (t *Tracker) SetAnyInterfaceUp(up bool) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.anyInterfaceUp.Set(up)
t.selfCheckLocked()
}
// SetUDP4Unbound sets whether the udp4 bind failed completely.
func SetUDP4Unbound(unbound bool) {
mu.Lock()
defer mu.Unlock()
udp4Unbound = unbound
selfCheckLocked()
func (t *Tracker) SetUDP4Unbound(unbound bool) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.udp4Unbound = unbound
t.selfCheckLocked()
}
// SetAuthRoutineInError records the latest error encountered as a result of a
// login attempt. Providing a nil error indicates successful login, or that
// being logged in w/coordination is not currently desired.
func SetAuthRoutineInError(err error) {
mu.Lock()
defer mu.Unlock()
lastLoginErr = err
func (t *Tracker) SetAuthRoutineInError(err error) {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.lastLoginErr = err
}
func timerSelfCheck() {
mu.Lock()
defer mu.Unlock()
checkReceiveFuncs()
selfCheckLocked()
if timer != nil {
timer.Reset(time.Minute)
func (t *Tracker) timerSelfCheck() {
if t.nil() {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.checkReceiveFuncsLocked()
t.selfCheckLocked()
if t.timer != nil {
t.timer.Reset(time.Minute)
}
}
func selfCheckLocked() {
if ipnState == "" {
func (t *Tracker) selfCheckLocked() {
if t.ipnState == "" {
// Don't check yet.
return
}
setLocked(SysOverall, overallErrorLocked())
t.setLocked(SysOverall, t.overallErrorLocked())
}
// OverallError returns a summary of the health state.
//
// If there are multiple problems, the error will be of type
// multierr.Error.
func OverallError() error {
mu.Lock()
defer mu.Unlock()
return overallErrorLocked()
func (t *Tracker) OverallError() error {
if t.nil() {
return nil
}
t.mu.Lock()
defer t.mu.Unlock()
return t.overallErrorLocked()
}
var fakeErrForTesting = envknob.RegisterString("TS_DEBUG_FAKE_HEALTH_ERROR")
// networkErrorf creates an error that indicates issues with outgoing network
// networkErrorfLocked creates an error that indicates issues with outgoing network
// connectivity. Any active warnings related to network connectivity will
// automatically be appended to it.
func networkErrorf(format string, a ...any) error {
//
// t.mu must be held.
func (t *Tracker) networkErrorfLocked(format string, a ...any) error {
errs := []error{
fmt.Errorf(format, a...),
}
for w := range warnables {
for _, w := range t.warnables {
if !w.hasConnectivityImpact {
continue
}
if err := w.get(); err != nil {
if err := t.warnableVal[w]; err != nil {
errs = append(errs, err)
}
}
@ -477,81 +605,82 @@ func networkErrorf(format string, a ...any) error {
return multierr.New(errs...)
}
var errNetworkDown = networkErrorf("network down")
var errNotInMapPoll = networkErrorf("not in map poll")
var errNetworkDown = errors.New("network down")
var errNotInMapPoll = errors.New("not in map poll")
var errNoDERPHome = errors.New("no DERP home")
var errNoUDP4Bind = networkErrorf("no udp4 bind")
var errNoUDP4Bind = errors.New("no udp4 bind")
func overallErrorLocked() error {
if !anyInterfaceUp {
func (t *Tracker) overallErrorLocked() error {
if v, ok := t.anyInterfaceUp.Get(); ok && !v {
return errNetworkDown
}
if localLogConfigErr != nil {
return localLogConfigErr
if t.localLogConfigErr != nil {
return t.localLogConfigErr
}
if !ipnWantRunning {
return fmt.Errorf("state=%v, wantRunning=%v", ipnState, ipnWantRunning)
if !t.ipnWantRunning {
return fmt.Errorf("state=%v, wantRunning=%v", t.ipnState, t.ipnWantRunning)
}
if lastLoginErr != nil {
return fmt.Errorf("not logged in, last login error=%v", lastLoginErr)
if t.lastLoginErr != nil {
return fmt.Errorf("not logged in, last login error=%v", t.lastLoginErr)
}
now := time.Now()
if !inMapPoll && (lastMapPollEndedAt.IsZero() || now.Sub(lastMapPollEndedAt) > 10*time.Second) {
if !t.inMapPoll && (t.lastMapPollEndedAt.IsZero() || now.Sub(t.lastMapPollEndedAt) > 10*time.Second) {
return errNotInMapPoll
}
const tooIdle = 2*time.Minute + 5*time.Second
if d := now.Sub(lastStreamedMapResponse).Round(time.Second); d > tooIdle {
return networkErrorf("no map response in %v", d)
if d := now.Sub(t.lastStreamedMapResponse).Round(time.Second); d > tooIdle {
return t.networkErrorfLocked("no map response in %v", d)
}
if !derpHomeless {
rid := derpHomeRegion
if !t.derpHomeless {
rid := t.derpHomeRegion
if rid == 0 {
return errNoDERPHome
}
if !derpRegionConnected[rid] {
return networkErrorf("not connected to home DERP region %v", rid)
if !t.derpRegionConnected[rid] {
return t.networkErrorfLocked("not connected to home DERP region %v", rid)
}
if d := now.Sub(derpRegionLastFrame[rid]).Round(time.Second); d > tooIdle {
return networkErrorf("haven't heard from home DERP region %v in %v", rid, d)
if d := now.Sub(t.derpRegionLastFrame[rid]).Round(time.Second); d > tooIdle {
return t.networkErrorfLocked("haven't heard from home DERP region %v in %v", rid, d)
}
}
if udp4Unbound {
if t.udp4Unbound {
return errNoUDP4Bind
}
// TODO: use
_ = inMapPollSince
_ = lastMapPollEndedAt
_ = lastStreamedMapResponse
_ = lastMapRequestHeard
_ = t.inMapPollSince
_ = t.lastMapPollEndedAt
_ = t.lastStreamedMapResponse
_ = t.lastMapRequestHeard
var errs []error
for _, recv := range receiveFuncs {
if recv.missing {
errs = append(errs, fmt.Errorf("%s is not running", recv.name))
for i := range t.MagicSockReceiveFuncs {
f := &t.MagicSockReceiveFuncs[i]
if f.missing {
errs = append(errs, fmt.Errorf("%s is not running", f.name))
}
}
for sys, err := range sysErr {
for sys, err := range t.sysErr {
if err == nil || sys == SysOverall {
continue
}
errs = append(errs, fmt.Errorf("%v: %w", sys, err))
}
for w := range warnables {
if err := w.get(); err != nil {
for _, w := range t.warnables {
if err := t.warnableVal[w]; err != nil {
errs = append(errs, err)
}
}
for regionID, problem := range derpRegionHealthProblem {
for regionID, problem := range t.derpRegionHealthProblem {
errs = append(errs, fmt.Errorf("derp%d: %v", regionID, problem))
}
for _, s := range controlHealth {
for _, s := range t.controlHealth {
errs = append(errs, errors.New(s))
}
if err := envknob.ApplyDiskConfigError(); err != nil {
errs = append(errs, err)
}
for serverName, err := range tlsConnectionErrors {
for serverName, err := range t.tlsConnectionErrors {
errs = append(errs, fmt.Errorf("TLS connection error for %q: %w", serverName, err))
}
if e := fakeErrForTesting(); len(errs) == 0 && e != "" {
@ -564,63 +693,69 @@ func overallErrorLocked() error {
return multierr.New(errs...)
}
var (
ReceiveIPv4 = ReceiveFuncStats{name: "ReceiveIPv4"}
ReceiveIPv6 = ReceiveFuncStats{name: "ReceiveIPv6"}
ReceiveDERP = ReceiveFuncStats{name: "ReceiveDERP"}
receiveFuncs = []*ReceiveFuncStats{&ReceiveIPv4, &ReceiveIPv6, &ReceiveDERP}
)
func init() {
if runtime.GOOS == "js" {
receiveFuncs = receiveFuncs[2:] // ignore IPv4 and IPv6
}
}
// ReceiveFuncStats tracks the calls made to a wireguard-go receive func.
type ReceiveFuncStats struct {
// name is the name of the receive func.
// It's lazily populated.
name string
// numCalls is the number of times the receive func has ever been called.
// It is required because it is possible for a receive func's wireguard-go goroutine
// to be active even though the receive func isn't.
// The wireguard-go goroutine alternates between calling the receive func and
// processing what the func returned.
numCalls uint64 // accessed atomically
numCalls atomic.Uint64
// prevNumCalls is the value of numCalls last time the health check examined it.
prevNumCalls uint64
// inCall indicates whether the receive func is currently running.
inCall uint32 // bool, accessed atomically
inCall atomic.Bool
// missing indicates whether the receive func is not running.
missing bool
}
func (s *ReceiveFuncStats) Enter() {
atomic.AddUint64(&s.numCalls, 1)
atomic.StoreUint32(&s.inCall, 1)
s.numCalls.Add(1)
s.inCall.Store(true)
}
func (s *ReceiveFuncStats) Exit() {
atomic.StoreUint32(&s.inCall, 0)
s.inCall.Store(false)
}
func checkReceiveFuncs() {
for _, recv := range receiveFuncs {
recv.missing = false
prev := recv.prevNumCalls
numCalls := atomic.LoadUint64(&recv.numCalls)
recv.prevNumCalls = numCalls
// ReceiveFuncStats returns the ReceiveFuncStats tracker for the given func
// type.
//
// If t is nil, it returns nil.
func (t *Tracker) ReceiveFuncStats(which ReceiveFunc) *ReceiveFuncStats {
if t == nil {
return nil
}
return &t.MagicSockReceiveFuncs[which]
}
func (t *Tracker) checkReceiveFuncsLocked() {
for i := range t.MagicSockReceiveFuncs {
f := &t.MagicSockReceiveFuncs[i]
if f.name == "" {
f.name = (ReceiveFunc(i)).String()
}
if runtime.GOOS == "js" && i < 2 {
// Skip IPv4 and IPv6 on js.
continue
}
f.missing = false
prev := f.prevNumCalls
numCalls := f.numCalls.Load()
f.prevNumCalls = numCalls
if numCalls > prev {
// OK: the function has gotten called since last we checked
continue
}
if atomic.LoadUint32(&recv.inCall) == 1 {
if f.inCall.Load() {
// OK: the function is active, probably blocked due to inactivity
continue
}
// Not OK: The function is not active, and not accumulating new calls.
// It is probably MIA.
recv.missing = true
f.missing = true
}
}

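With this refactor, health state lives on a *health.Tracker value and every exported method tolerates a nil receiver. A minimal usage sketch (the error string is a placeholder):

tr := new(health.Tracker)
unregister := tr.RegisterWatcher(func(sys health.Subsystem, err error) {
	log.Printf("health change: %v: %v", sys, err)
})
defer unregister()

tr.SetDNSHealth(errors.New("example DNS problem")) // placeholder error
if err := tr.OverallError(); err != nil {
	log.Printf("overall: %v", err)
}

var nilTracker *health.Tracker
nilTracker.SetDNSHealth(nil) // safe no-op; logs a stack trace only in CI
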
@ -8,17 +8,15 @@ import (
"fmt"
"reflect"
"testing"
"tailscale.com/util/set"
)
func TestAppendWarnableDebugFlags(t *testing.T) {
resetWarnables()
var tr Tracker
for i := range 10 {
w := NewWarnable(WithMapDebugFlag(fmt.Sprint(i)))
if i%2 == 0 {
w.Set(errors.New("boom"))
tr.SetWarnable(w, errors.New("boom"))
}
}
@ -27,15 +25,27 @@ func TestAppendWarnableDebugFlags(t *testing.T) {
var got []string
for range 20 {
got = append(got[:0], "z", "y")
got = AppendWarnableDebugFlags(got)
got = tr.AppendWarnableDebugFlags(got)
if !reflect.DeepEqual(got, want) {
t.Fatalf("AppendWarnableDebugFlags = %q; want %q", got, want)
}
}
}
func resetWarnables() {
mu.Lock()
defer mu.Unlock()
warnables = set.Set[*Warnable]{}
// Test that all exported methods on *Tracker don't panic with a nil receiver.
func TestNilMethodsDontCrash(t *testing.T) {
var nilt *Tracker
rv := reflect.ValueOf(nilt)
for i := 0; i < rv.NumMethod(); i++ {
mt := rv.Type().Method(i)
t.Logf("calling Tracker.%s ...", mt.Name)
var args []reflect.Value
for j := 0; j < mt.Type.NumIn(); j++ {
if j == 0 && mt.Type.In(j) == reflect.TypeFor[*Tracker]() {
continue
}
args = append(args, reflect.Zero(mt.Type.In(j)))
}
rv.Method(i).Call(args)
}
}
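
The receive-func stats follow the same pattern: callers fetch the per-func tracker and bracket each receive call. A sketch of the intended use (not magicsock's actual code):

stats := tr.ReceiveFuncStats(health.ReceiveIPv4)
stats.Enter() // bumps numCalls and marks the func as in-call
defer stats.Exit()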

@ -4,10 +4,10 @@
package ipnlocal
import (
"cmp"
"fmt"
"os"
"slices"
"strings"
"tailscale.com/drive"
"tailscale.com/ipn"
@ -318,40 +318,47 @@ func (b *LocalBackend) updateDrivePeersLocked(nm *netmap.NetworkMap) {
func (b *LocalBackend) driveRemotesFromPeers(nm *netmap.NetworkMap) []*drive.Remote {
driveRemotes := make([]*drive.Remote, 0, len(nm.Peers))
for _, p := range nm.Peers {
// Exclude mullvad exit nodes from list of Taildrive peers
// TODO(oxtoacart) - once we have a better mechanism for finding only accessible sharers
// (see below) we can remove this logic.
if strings.HasSuffix(p.Name(), ".mullvad.ts.net.") {
continue
}
peerID := p.ID()
url := fmt.Sprintf("%s/%s", peerAPIBase(nm, p), taildrivePrefix[1:])
driveRemotes = append(driveRemotes, &drive.Remote{
Name: p.DisplayName(false),
URL: url,
Available: func() bool {
// TODO(oxtoacart): need to figure out a performant and reliable way to only
// show the peers that have shares to which we have access
// This will require work on the control server to transmit the inverse
// of the "tailscale.com/cap/drive" capability.
// For now, at least limit it only to nodes that are online.
// Note, we have to iterate the latest netmap because the peer we got from the first iteration may not be it
// Peers are available to Taildrive if:
// - They are online
// - They are allowed to share at least one folder with us
b.mu.Lock()
latestNetMap := b.netMap
b.mu.Unlock()
for _, candidate := range latestNetMap.Peers {
if candidate.ID() == peerID {
online := candidate.Online()
// TODO(oxtoacart): for some reason, this correctly
// catches when a node goes from offline to online,
// but not the other way around...
return online != nil && *online
idx, found := slices.BinarySearchFunc(latestNetMap.Peers, peerID, func(candidate tailcfg.NodeView, id tailcfg.NodeID) int {
return cmp.Compare(candidate.ID(), id)
})
if !found {
return false
}
peer := latestNetMap.Peers[idx]
// Exclude offline peers.
// TODO(oxtoacart): for some reason, this correctly
// catches when a node goes from offline to online,
// but not the other way around...
online := peer.Online()
if online == nil || !*online {
return false
}
// Check that the peer is allowed to share with us.
addresses := peer.Addresses()
for i := range addresses.Len() {
addr := addresses.At(i)
capsMap := b.PeerCaps(addr.Addr())
if capsMap.HasCapability(tailcfg.PeerCapabilityTaildriveSharer) {
return true
}
}
// peer not found, must not be available
return false
},
})

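The availability callback above swaps a linear scan for a binary search, relying on netmap peers being sorted by node ID. The lookup pattern in isolation (peers and wantID are placeholders):

idx, found := slices.BinarySearchFunc(peers, wantID,
	func(p tailcfg.NodeView, id tailcfg.NodeID) int {
		return cmp.Compare(p.ID(), id)
	})
if found {
	_ = peers[idx] // the matching peer
}
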
@ -59,7 +59,6 @@ import (
"tailscale.com/net/dns"
"tailscale.com/net/dnscache"
"tailscale.com/net/dnsfallback"
"tailscale.com/net/interfaces"
"tailscale.com/net/netcheck"
"tailscale.com/net/netkernelconf"
"tailscale.com/net/netmon"
@ -170,6 +169,7 @@ type LocalBackend struct {
keyLogf logger.Logf // for printing list of peers on change
statsLogf logger.Logf // for printing peers stats on change
sys *tsd.System
health *health.Tracker // always non-nil
e wgengine.Engine // non-nil; TODO(bradfitz): remove; use sys
store ipn.StateStore // non-nil; TODO(bradfitz): remove; use sys
dialer *tsdial.Dialer // non-nil; TODO(bradfitz): remove; use sys
@ -257,7 +257,7 @@ type LocalBackend struct {
authURLTime time.Time // when the authURL was received from the control server
interact bool
egg bool
prevIfState *interfaces.State
prevIfState *netmon.State
peerAPIServer *peerAPIServer // or nil
peerAPIListeners []*peerAPIListener
loginFlags controlclient.LoginFlags
@ -326,6 +326,16 @@ type LocalBackend struct {
outgoingFiles map[string]*ipn.OutgoingFile
}
// HealthTracker returns the health tracker for the backend.
func (b *LocalBackend) HealthTracker() *health.Tracker {
return b.health
}
// NetMon returns the network monitor for the backend.
func (b *LocalBackend) NetMon() *netmon.Monitor {
return b.sys.NetMon.Get()
}
type updateStatus struct {
started bool
}
@ -342,6 +352,12 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
e := sys.Engine.Get()
store := sys.StateStore.Get()
dialer := sys.Dialer.Get()
if dialer == nil {
return nil, errors.New("dialer to NewLocalBackend must be set")
}
if dialer.NetMon() == nil {
return nil, errors.New("dialer to NewLocalBackend must have a NetMon")
}
_ = sys.MagicSock.Get() // or panic
goos := envknob.GOOS()
@ -386,6 +402,7 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
keyLogf: logger.LogOnChange(logf, 5*time.Minute, clock.Now),
statsLogf: logger.LogOnChange(logf, 5*time.Minute, clock.Now),
sys: sys,
health: sys.HealthTracker(),
conf: sys.InitialConfig,
e: e,
dialer: dialer,
@ -403,7 +420,7 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
}
netMon := sys.NetMon.Get()
b.sockstatLogger, err = sockstatlog.NewLogger(logpolicy.LogsDir(logf), logf, logID, netMon)
b.sockstatLogger, err = sockstatlog.NewLogger(logpolicy.LogsDir(logf), logf, logID, netMon, sys.HealthTracker())
if err != nil {
log.Printf("error setting up sockstat logger: %v", err)
}
@ -426,7 +443,7 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
b.linkChange(&netmon.ChangeDelta{New: netMon.InterfaceState()})
b.unregisterNetMon = netMon.RegisterChangeCallback(b.linkChange)
b.unregisterHealthWatch = health.RegisterWatcher(b.onHealthChange)
b.unregisterHealthWatch = b.health.RegisterWatcher(b.onHealthChange)
if tunWrap, ok := b.sys.Tun.GetOK(); ok {
tunWrap.PeerAPIPort = b.GetPeerAPIPort
@ -625,7 +642,7 @@ func (b *LocalBackend) linkChange(delta *netmon.ChangeDelta) {
// If the local network configuration has changed, our filter may
// need updating to tweak default routes.
b.updateFilterLocked(b.netMap, b.pm.CurrentPrefs())
updateExitNodeUsageWarning(b.pm.CurrentPrefs(), delta.New)
updateExitNodeUsageWarning(b.pm.CurrentPrefs(), delta.New, b.health)
if peerAPIListenAsync && b.netMap != nil && b.state == ipn.Running {
want := b.netMap.GetAddresses().Len()
@ -761,7 +778,7 @@ func (b *LocalBackend) UpdateStatus(sb *ipnstate.StatusBuilder) {
}
}
}
if err := health.OverallError(); err != nil {
if err := b.health.OverallError(); err != nil {
switch e := err.(type) {
case multierr.Error:
for _, err := range e.Errors() {
@ -820,7 +837,7 @@ func (b *LocalBackend) UpdateStatus(sb *ipnstate.StatusBuilder) {
sb.MutateSelfStatus(func(ss *ipnstate.PeerStatus) {
ss.OS = version.OS()
ss.Online = health.GetInPollNetMap()
ss.Online = b.health.GetInPollNetMap()
if b.netMap != nil {
ss.InNetworkMap = true
if hi := b.netMap.SelfNode.Hostinfo(); hi.Valid() {
@ -1221,7 +1238,7 @@ func (b *LocalBackend) SetControlClientStatus(c controlclient.Client, st control
if st.NetMap != nil {
if envknob.NoLogsNoSupport() && st.NetMap.HasCap(tailcfg.CapabilityDataPlaneAuditLogs) {
msg := "tailnet requires logging to be enabled. Remove --no-logs-no-support from tailscaled command line."
health.SetLocalLogConfigHealth(errors.New(msg))
b.health.SetLocalLogConfigHealth(errors.New(msg))
// Connecting to this tailnet without logging is forbidden; boot us outta here.
b.mu.Lock()
prefs.WantRunning = false
@ -1750,7 +1767,7 @@ func (b *LocalBackend) Start(opts ipn.Options) error {
HTTPTestClient: httpTestClient,
DiscoPublicKey: discoPublic,
DebugFlags: debugFlags,
NetMon: b.sys.NetMon.Get(),
HealthTracker: b.health,
Pinger: b,
PopBrowserURL: b.tellClientToBrowseToURL,
OnClientVersion: b.onClientVersion,
@ -1851,10 +1868,10 @@ func (b *LocalBackend) updateFilterLocked(netMap *netmap.NetworkMap, prefs ipn.P
if packetFilterPermitsUnlockedNodes(b.peers, packetFilter) {
err := errors.New("server sent invalid packet filter permitting traffic to unlocked nodes; rejecting all packets for safety")
warnInvalidUnsignedNodes.Set(err)
b.health.SetWarnable(warnInvalidUnsignedNodes, err)
packetFilter = nil
} else {
warnInvalidUnsignedNodes.Set(nil)
b.health.SetWarnable(warnInvalidUnsignedNodes, nil)
}
}
if prefs.Valid() {
@ -2011,18 +2028,18 @@ var removeFromDefaultRoute = []netip.Prefix{
// Given that "internal" routes don't leave the device, we choose to
// trust them more, allowing access to them when an Exit Node is enabled.
func internalAndExternalInterfaces() (internal, external []netip.Prefix, err error) {
il, err := interfaces.GetList()
il, err := netmon.GetInterfaceList()
if err != nil {
return nil, nil, err
}
return internalAndExternalInterfacesFrom(il, runtime.GOOS)
}
func internalAndExternalInterfacesFrom(il interfaces.List, goos string) (internal, external []netip.Prefix, err error) {
func internalAndExternalInterfacesFrom(il netmon.InterfaceList, goos string) (internal, external []netip.Prefix, err error) {
// We use an IPSetBuilder here to canonicalize the prefixes
// and to remove any duplicate entries.
var internalBuilder, externalBuilder netipx.IPSetBuilder
if err := il.ForeachInterfaceAddress(func(iface interfaces.Interface, pfx netip.Prefix) {
if err := il.ForeachInterfaceAddress(func(iface netmon.Interface, pfx netip.Prefix) {
if tsaddr.IsTailscaleIP(pfx.Addr()) {
return
}
@ -2066,7 +2083,7 @@ func internalAndExternalInterfacesFrom(il interfaces.List, goos string) (interna
func interfaceRoutes() (ips *netipx.IPSet, hostIPs []netip.Addr, err error) {
var b netipx.IPSetBuilder
if err := interfaces.ForeachInterfaceAddress(func(_ interfaces.Interface, pfx netip.Prefix) {
if err := netmon.ForeachInterfaceAddress(func(_ netmon.Interface, pfx netip.Prefix) {
if tsaddr.IsTailscaleIP(pfx.Addr()) {
return
}
@ -2440,9 +2457,12 @@ func (b *LocalBackend) popBrowserAuthNow() {
b.authURL = "" // but NOT clearing authURLSticky
b.mu.Unlock()
b.logf("popBrowserAuthNow: url=%v", url != "")
b.logf("popBrowserAuthNow: url=%v, key-expired=%v, seamless-key-renewal=%v", url != "", b.keyExpired, b.seamlessRenewalEnabled())
if !b.seamlessRenewalEnabled() {
// Deconfigure the local network data plane if:
// - seamless key renewal is not enabled;
// - key is expired (in which case tailnet connectivity is down anyway).
if !b.seamlessRenewalEnabled() || b.keyExpired {
b.blockEngineUpdates(true)
b.stopEngineAndWait()
}
@ -3045,7 +3065,7 @@ var warnExitNodeUsage = health.NewWarnable(health.WithConnectivityImpact())
// updateExitNodeUsageWarning updates a warnable meant to notify users of
// configuration issues that could break exit node usage.
func updateExitNodeUsageWarning(p ipn.PrefsView, state *interfaces.State) {
func updateExitNodeUsageWarning(p ipn.PrefsView, state *netmon.State, health *health.Tracker) {
var result error
if p.ExitNodeIP().IsValid() || p.ExitNodeID() != "" {
warn, _ := netutil.CheckReversePathFiltering(state)
@ -3054,7 +3074,7 @@ func updateExitNodeUsageWarning(p ipn.PrefsView, state *interfaces.State) {
result = fmt.Errorf("%s: %v, %s", healthmsg.WarnExitNodeUsage, warn, comment)
}
}
warnExitNodeUsage.Set(result)
health.SetWarnable(warnExitNodeUsage, result)
}
func (b *LocalBackend) checkExitNodePrefsLocked(p *ipn.Prefs) error {
@ -3118,6 +3138,19 @@ func (b *LocalBackend) SetUseExitNodeEnabled(v bool) (ipn.PrefsView, error) {
return b.editPrefsLockedOnEntry(mp, unlock)
}
// MaybeClearAppConnector clears the routes from any AppConnector if
// AdvertiseRoutes has been set in the MaskedPrefs.
func (b *LocalBackend) MaybeClearAppConnector(mp *ipn.MaskedPrefs) error {
var err error
if b.appConnector != nil && mp.AdvertiseRoutesSet {
err = b.appConnector.ClearRoutes()
if err != nil {
b.logf("appc: clear routes error: %v", err)
}
}
return err
}
func (b *LocalBackend) EditPrefs(mp *ipn.MaskedPrefs) (ipn.PrefsView, error) {
if mp.SetsInternal() {
return ipn.PrefsView{}, errors.New("can't set Internal fields")
@ -3496,8 +3529,22 @@ func (b *LocalBackend) reconfigAppConnectorLocked(nm *netmap.NetworkMap, prefs i
return
}
if b.appConnector == nil {
b.appConnector = appc.NewAppConnector(b.logf, b)
shouldAppCStoreRoutes := b.ControlKnobs().AppCStoreRoutes.Load()
if b.appConnector == nil || b.appConnector.ShouldStoreRoutes() != shouldAppCStoreRoutes {
var ri *appc.RouteInfo
var storeFunc func(*appc.RouteInfo) error
if shouldAppCStoreRoutes {
var err error
ri, err = b.readRouteInfoLocked()
if err != nil {
ri = &appc.RouteInfo{}
if err != ipn.ErrStateNotExist {
b.logf("Unsuccessful Read RouteInfo: ", err)
}
}
storeFunc = b.storeRouteInfo
}
b.appConnector = appc.NewAppConnector(b.logf, b, ri, storeFunc)
}
if nm == nil {
return
@ -3626,7 +3673,7 @@ func shouldUseOneCGNATRoute(logf logger.Logf, controlKnobs *controlknobs.Knobs,
// use fine-grained routes if another interfaces is also using the CGNAT
// IP range.
if versionOS == "macOS" {
hasCGNATInterface, err := interfaces.HasCGNATInterface()
hasCGNATInterface, err := netmon.HasCGNATInterface()
if err != nil {
logf("shouldUseOneCGNATRoute: Could not determine if any interfaces use CGNAT: %v", err)
return false
@ -4251,7 +4298,7 @@ func (b *LocalBackend) enterStateLockedOnEntry(newState ipn.State, unlock unlock
// prefs may change irrespective of state; WantRunning should be explicitly
// set before potential early return even if the state is unchanged.
health.SetIPNState(newState.String(), prefs.Valid() && prefs.WantRunning())
b.health.SetIPNState(newState.String(), prefs.Valid() && prefs.WantRunning())
if oldState == newState {
return
}
@ -4689,9 +4736,9 @@ func (b *LocalBackend) setNetMapLocked(nm *netmap.NetworkMap) {
b.pauseOrResumeControlClientLocked()
if nm != nil {
health.SetControlHealth(nm.ControlHealth)
b.health.SetControlHealth(nm.ControlHealth)
} else {
health.SetControlHealth(nil)
b.health.SetControlHealth(nil)
}
// Determine if file sharing is enabled
@ -5676,9 +5723,9 @@ var warnSSHSELinux = health.NewWarnable()
func (b *LocalBackend) updateSELinuxHealthWarning() {
if hostinfo.IsSELinuxEnforcing() {
warnSSHSELinux.Set(errors.New("SELinux is enabled; Tailscale SSH may not work. See https://tailscale.com/s/ssh-selinux"))
b.health.SetWarnable(warnSSHSELinux, errors.New("SELinux is enabled; Tailscale SSH may not work. See https://tailscale.com/s/ssh-selinux"))
} else {
warnSSHSELinux.Set(nil)
b.health.SetWarnable(warnSSHSELinux, nil)
}
}
@ -5905,7 +5952,7 @@ func (b *LocalBackend) resetForProfileChangeLockedOnEntry(unlock unlockOnce) err
b.lastServeConfJSON = mem.B(nil)
b.serveConfig = ipn.ServeConfigView{}
b.enterStateLockedOnEntry(ipn.NoState, unlock) // Reset state; releases b.mu
health.SetLocalLogConfigHealth(nil)
b.health.SetLocalLogConfigHealth(nil)
return b.Start(ipn.Options{})
}
@ -6190,6 +6237,43 @@ func (b *LocalBackend) UnadvertiseRoute(toRemove ...netip.Prefix) error {
return err
}
// namespaceKeyForCurrentProfile namespaces key with the profile manager's current profile key, if any.
func namespaceKeyForCurrentProfile(pm *profileManager, key ipn.StateKey) ipn.StateKey {
return pm.CurrentProfile().Key + "||" + key
}
const routeInfoStateStoreKey ipn.StateKey = "_routeInfo"
func (b *LocalBackend) storeRouteInfo(ri *appc.RouteInfo) error {
b.mu.Lock()
defer b.mu.Unlock()
if b.pm.CurrentProfile().ID == "" {
return nil
}
key := namespaceKeyForCurrentProfile(b.pm, routeInfoStateStoreKey)
bs, err := json.Marshal(ri)
if err != nil {
return err
}
return b.pm.WriteState(key, bs)
}
func (b *LocalBackend) readRouteInfoLocked() (*appc.RouteInfo, error) {
if b.pm.CurrentProfile().ID == "" {
return &appc.RouteInfo{}, nil
}
key := namespaceKeyForCurrentProfile(b.pm, routeInfoStateStoreKey)
bs, err := b.pm.Store().ReadState(key)
ri := &appc.RouteInfo{}
if err != nil {
return nil, err
}
if err := json.Unmarshal(bs, ri); err != nil {
return nil, err
}
return ri, nil
}
// seamlessRenewalEnabled reports whether seamless key renewals are enabled
// (i.e. we saw our self node with the SeamlessKeyRenewal attr in a netmap).
// This enables beta functionality of renewing node keys without breaking
@ -6237,6 +6321,7 @@ func mayDeref[T any](p *T) (v T) {
}
var ErrNoPreferredDERP = errors.New("no preferred DERP, try again later")
var ErrCannotSuggestExitNode = errors.New("unable to suggest an exit node, try again later")
// SuggestExitNode computes a suggestion based on the current netmap and last netcheck report. If
// there are multiple equally good options, one is selected at random, so the result is not stable. To be
@ -6250,6 +6335,9 @@ func (b *LocalBackend) SuggestExitNode() (response apitype.ExitNodeSuggestionRes
lastReport := b.MagicConn().GetLastNetcheckReport(b.ctx)
netMap := b.netMap
b.mu.Unlock()
if lastReport == nil || netMap == nil {
return response, ErrCannotSuggestExitNode
}
seed := time.Now().UnixNano()
r := rand.New(rand.NewSource(seed))
return suggestExitNode(lastReport, netMap, r)

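Tying the route-persistence pieces together: with the knob on, the app connector is seeded with the last persisted RouteInfo and a store callback; with it off, both are nil. A hedged sketch mirroring reconfigAppConnectorLocked (stateStore, advertiser, logf, and the key literal are placeholders):

var ac *appc.AppConnector
if shouldStore {
	ri := &appc.RouteInfo{} // or the previously persisted value
	store := func(ri *appc.RouteInfo) error {
		bs, err := json.Marshal(ri)
		if err != nil {
			return err
		}
		return stateStore.WriteState("_routeInfo", bs) // key naming assumed
	}
	ac = appc.NewAppConnector(logf, advertiser, ri, store)
} else {
	ac = appc.NewAppConnector(logf, advertiser, nil, nil)
}
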
@ -15,6 +15,7 @@ import (
"net/netip"
"os"
"reflect"
"runtime"
"slices"
"sync"
"testing"
@ -31,8 +32,8 @@ import (
"tailscale.com/drive/driveimpl"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
"tailscale.com/net/interfaces"
"tailscale.com/net/netcheck"
"tailscale.com/net/netmon"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/tsd"
@ -54,6 +55,8 @@ import (
"tailscale.com/wgengine/wgcfg"
)
func fakeStoreRoutes(*appc.RouteInfo) error { return nil }
func inRemove(ip netip.Addr) bool {
for _, pfx := range removeFromDefaultRoute {
if pfx.Contains(ip) {
@ -602,7 +605,7 @@ func TestFileTargets(t *testing.T) {
func TestInternalAndExternalInterfaces(t *testing.T) {
type interfacePrefix struct {
i interfaces.Interface
i netmon.Interface
pfx netip.Prefix
}
@ -612,7 +615,7 @@ func TestInternalAndExternalInterfaces(t *testing.T) {
}
return pfxs
}
iList := func(ips ...interfacePrefix) (il interfaces.List) {
iList := func(ips ...interfacePrefix) (il netmon.InterfaceList) {
for _, ip := range ips {
il = append(il, ip.i)
}
@ -620,7 +623,7 @@ func TestInternalAndExternalInterfaces(t *testing.T) {
}
newInterface := func(name, pfx string, wsl2, loopback bool) interfacePrefix {
ippfx := netip.MustParsePrefix(pfx)
ip := interfaces.Interface{
ip := netmon.Interface{
Interface: &net.Interface{},
AltAddrs: []net.Addr{
netipx.PrefixIPNet(ippfx),
@ -644,7 +647,7 @@ func TestInternalAndExternalInterfaces(t *testing.T) {
tests := []struct {
name string
goos string
il interfaces.List
il netmon.InterfaceList
wantInt []netip.Prefix
wantExt []netip.Prefix
}{
@ -1290,13 +1293,19 @@ func TestDNSConfigForNetmapForExitNodeConfigs(t *testing.T) {
}
func TestOfferingAppConnector(t *testing.T) {
b := newTestBackend(t)
if b.OfferingAppConnector() {
t.Fatal("unexpected offering app connector")
}
b.appConnector = appc.NewAppConnector(t.Logf, nil)
if !b.OfferingAppConnector() {
t.Fatal("unexpected not offering app connector")
for _, shouldStore := range []bool{false, true} {
b := newTestBackend(t)
if b.OfferingAppConnector() {
t.Fatal("unexpected offering app connector")
}
if shouldStore {
b.appConnector = appc.NewAppConnector(t.Logf, nil, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
b.appConnector = appc.NewAppConnector(t.Logf, nil, nil, nil)
}
if !b.OfferingAppConnector() {
t.Fatal("unexpected not offering app connector")
}
}
}
@ -1341,21 +1350,27 @@ func TestRouterAdvertiserIgnoresContainedRoutes(t *testing.T) {
}
func TestObserveDNSResponse(t *testing.T) {
b := newTestBackend(t)
for _, shouldStore := range []bool{false, true} {
b := newTestBackend(t)
// ensure no error when no app connector is configured
b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
// ensure no error when no app connector is configured
b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
rc := &appctest.RouteCollector{}
b.appConnector = appc.NewAppConnector(t.Logf, rc)
b.appConnector.UpdateDomains([]string{"example.com"})
b.appConnector.Wait(context.Background())
b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
b.appConnector.Wait(context.Background())
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Fatalf("got routes %v, want %v", rc.Routes(), wantRoutes)
rc := &appctest.RouteCollector{}
if shouldStore {
b.appConnector = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
b.appConnector = appc.NewAppConnector(t.Logf, rc, nil, nil)
}
b.appConnector.UpdateDomains([]string{"example.com"})
b.appConnector.Wait(context.Background())
b.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
b.appConnector.Wait(context.Background())
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Fatalf("got routes %v, want %v", rc.Routes(), wantRoutes)
}
}
}
@ -1568,6 +1583,11 @@ func (h *errorSyspolicyHandler) ReadBoolean(key string) (bool, error) {
return false, syspolicy.ErrNoSuchKey
}
func (h *errorSyspolicyHandler) ReadStringArray(key string) ([]string, error) {
h.t.Errorf("ReadStringArray(%q) unexpectedly called", key)
return nil, syspolicy.ErrNoSuchKey
}
type mockSyspolicyHandler struct {
t *testing.T
// stringPolicies is the collection of policies that we expect to see
@ -1607,6 +1627,13 @@ func (h *mockSyspolicyHandler) ReadBoolean(key string) (bool, error) {
return false, syspolicy.ErrNoSuchKey
}
func (h *mockSyspolicyHandler) ReadStringArray(key string) ([]string, error) {
if h.failUnknownPolicies {
h.t.Errorf("ReadStringArray(%q) unexpectedly called", key)
}
return nil, syspolicy.ErrNoSuchKey
}
func TestSetExitNodeIDPolicy(t *testing.T) {
pfx := netip.MustParsePrefix
tests := []struct {
@ -2256,8 +2283,10 @@ func TestPreferencePolicyInfo(t *testing.T) {
}
func TestOnTailnetDefaultAutoUpdate(t *testing.T) {
if runtime.GOOS == "darwin" {
t.Skip("test broken on macOS; see https://github.com/tailscale/tailscale/issues/11894")
}
tests := []struct {
desc string
before, after opt.Bool
tailnetDefault bool
}{
@ -2293,7 +2322,7 @@ func TestOnTailnetDefaultAutoUpdate(t *testing.T) {
},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("before=%s after=%s", tt.before, tt.after), func(t *testing.T) {
t.Run(fmt.Sprintf("before=%s,after=%s", tt.before, tt.after), func(t *testing.T) {
b := newTestBackend(t)
p := ipn.NewPrefs()
p.AutoUpdate.Apply = tt.before
@ -3439,3 +3468,66 @@ func TestEnableAutoUpdates(t *testing.T) {
t.Fatalf("disabling auto-updates: got error: %v", err)
}
}
func TestReadWriteRouteInfo(t *testing.T) {
// set up a backend with more than one profile
b := newTestBackend(t)
prof1 := ipn.LoginProfile{ID: "id1", Key: "key1"}
prof2 := ipn.LoginProfile{ID: "id2", Key: "key2"}
b.pm.knownProfiles["id1"] = &prof1
b.pm.knownProfiles["id2"] = &prof2
b.pm.currentProfile = &prof1
// set up routeInfo
ri1 := &appc.RouteInfo{}
ri1.Wildcards = []string{"1"}
ri2 := &appc.RouteInfo{}
ri2.Wildcards = []string{"2"}
// read before write
readRi, err := b.readRouteInfoLocked()
if readRi != nil {
t.Fatalf("read before writing: want nil, got %v", readRi)
}
if err != ipn.ErrStateNotExist {
t.Fatalf("read before writing: want %v, got %v", ipn.ErrStateNotExist, err)
}
// write the first routeInfo
if err := b.storeRouteInfo(ri1); err != nil {
t.Fatal(err)
}
// write the other routeInfo as the other profile
if err := b.pm.SwitchProfile("id2"); err != nil {
t.Fatal(err)
}
if err := b.storeRouteInfo(ri2); err != nil {
t.Fatal(err)
}
// read the routeInfo of the first profile
if err := b.pm.SwitchProfile("id1"); err != nil {
t.Fatal(err)
}
readRi, err = b.readRouteInfoLocked()
if err != nil {
t.Fatal(err)
}
if !slices.Equal(readRi.Wildcards, ri1.Wildcards) {
t.Fatalf("read prof1 routeInfo wildcards: want %v, got %v", ri1.Wildcards, readRi.Wildcards)
}
// read the routeInfo of the second profile
if err := b.pm.SwitchProfile("id2"); err != nil {
t.Fatal(err)
}
readRi, err = b.readRouteInfoLocked()
if err != nil {
t.Fatal(err)
}
if !slices.Equal(readRi.Wildcards, ri2.Wildcards) {
t.Fatalf("read prof2 routeInfo wildcards: want %v, got %v", ri2.Wildcards, readRi.Wildcards)
}
}
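These tests hand appc.NewAppConnector a fakeStoreRoutes helper as its new storeRoutesFunc argument. Its definition lives in a part of this change not shown here; a minimal stand-in, with the signature inferred from the call sites above, would be:

package ipnlocal

import "tailscale.com/appc"

// fakeStoreRoutes is a no-op persister standing in for the function that
// writes discovered routes to the StateStore (signature assumed from the
// call sites above).
func fakeStoreRoutes(*appc.RouteInfo) error { return nil }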

@ -20,7 +20,6 @@ import (
"path/filepath"
"time"
"tailscale.com/health"
"tailscale.com/health/healthmsg"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
@ -59,11 +58,11 @@ type tkaState struct {
// b.mu must be held.
func (b *LocalBackend) tkaFilterNetmapLocked(nm *netmap.NetworkMap) {
if b.tka == nil && !b.capTailnetLock {
health.SetTKAHealth(nil)
b.health.SetTKAHealth(nil)
return
}
if b.tka == nil {
health.SetTKAHealth(nil)
b.health.SetTKAHealth(nil)
return // TKA not enabled.
}
@ -117,9 +116,9 @@ func (b *LocalBackend) tkaFilterNetmapLocked(nm *netmap.NetworkMap) {
// Check that we ourselves are not locked out, report a health issue if so.
if nm.SelfNode.Valid() && b.tka.authority.NodeKeyAuthorized(nm.SelfNode.Key(), nm.SelfNode.KeySignature().AsSlice()) != nil {
health.SetTKAHealth(errors.New(healthmsg.LockedOut))
b.health.SetTKAHealth(errors.New(healthmsg.LockedOut))
} else {
health.SetTKAHealth(nil)
b.health.SetTKAHealth(nil)
}
}
@ -188,7 +187,7 @@ func (b *LocalBackend) tkaSyncIfNeeded(nm *netmap.NetworkMap, prefs ipn.PrefsVie
b.logf("Disablement failed, leaving TKA enabled. Error: %v", err)
} else {
isEnabled = false
health.SetTKAHealth(nil)
b.health.SetTKAHealth(nil)
}
} else {
return fmt.Errorf("[bug] unreachable invariant of wantEnabled w/ isEnabled")

@ -20,6 +20,8 @@ import (
"tailscale.com/hostinfo"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
"tailscale.com/net/netmon"
"tailscale.com/net/tsdial"
"tailscale.com/tailcfg"
"tailscale.com/tka"
"tailscale.com/types/key"
@ -50,6 +52,7 @@ func fakeControlClient(t *testing.T, c *http.Client) *controlclient.Auto {
HTTPTestClient: c,
NoiseTestClient: c,
Observer: observerFunc(func(controlclient.Status) {}),
Dialer: tsdial.NewDialer(netmon.NewStatic()),
}
cc, err := controlclient.NewNoStart(opts)

@ -34,8 +34,8 @@ import (
"tailscale.com/health"
"tailscale.com/hostinfo"
"tailscale.com/ipn"
"tailscale.com/net/interfaces"
"tailscale.com/net/netaddr"
"tailscale.com/net/netmon"
"tailscale.com/net/netutil"
"tailscale.com/net/sockstats"
"tailscale.com/tailcfg"
@ -51,7 +51,7 @@ const (
taildrivePrefix = "/v0/drive"
)
var initListenConfig func(*net.ListenConfig, netip.Addr, *interfaces.State, string) error
var initListenConfig func(*net.ListenConfig, netip.Addr, *netmon.State, string) error
// addH2C is non-nil on platforms where we want to add H2C
// ("cleartext" HTTP/2) support to the peerAPI.
@ -69,7 +69,7 @@ type peerAPIServer struct {
taildrop *taildrop.Manager
}
func (s *peerAPIServer) listen(ip netip.Addr, ifState *interfaces.State) (ln net.Listener, err error) {
func (s *peerAPIServer) listen(ip netip.Addr, ifState *netmon.State) (ln net.Listener, err error) {
// Android for whatever reason often has problems creating the peerapi listener.
// But since we started intercepting it with netstack, it's not even important that
// we have a real kernel-level listener. So just create a dummy listener on Android
@ -444,19 +444,19 @@ func (h *peerAPIHandler) handleServeInterfaces(w http.ResponseWriter, r *http.Re
w.Header().Set("Content-Type", "text/html; charset=utf-8")
fmt.Fprintln(w, "<h1>Interfaces</h1>")
if dr, err := interfaces.DefaultRoute(); err == nil {
if dr, err := netmon.DefaultRoute(); err == nil {
fmt.Fprintf(w, "<h3>Default route is %q(%d)</h3>\n", html.EscapeString(dr.InterfaceName), dr.InterfaceIndex)
} else {
fmt.Fprintf(w, "<h3>Could not get the default route: %s</h3>\n", html.EscapeString(err.Error()))
}
if hasCGNATInterface, err := interfaces.HasCGNATInterface(); hasCGNATInterface {
if hasCGNATInterface, err := netmon.HasCGNATInterface(); hasCGNATInterface {
fmt.Fprintln(w, "<p>There is another interface using the CGNAT range.</p>")
} else if err != nil {
fmt.Fprintf(w, "<p>Could not check for CGNAT interfaces: %s</p>\n", html.EscapeString(err.Error()))
}
i, err := interfaces.GetList()
i, err := netmon.GetInterfaceList()
if err != nil {
fmt.Fprintf(w, "Could not get interfaces: %s\n", html.EscapeString(err.Error()))
return
@ -468,12 +468,12 @@ func (h *peerAPIHandler) handleServeInterfaces(w http.ResponseWriter, r *http.Re
fmt.Fprintf(w, "<th>%v</th> ", v)
}
fmt.Fprint(w, "</tr>\n")
i.ForeachInterface(func(iface interfaces.Interface, ipps []netip.Prefix) {
i.ForeachInterface(func(iface netmon.Interface, ipps []netip.Prefix) {
fmt.Fprint(w, "<tr>")
for _, v := range []any{iface.Index, iface.Name, iface.MTU, iface.Flags, ipps} {
fmt.Fprintf(w, "<td>%s</td> ", html.EscapeString(fmt.Sprintf("%v", v)))
}
if extras, err := interfaces.InterfaceDebugExtras(iface.Index); err == nil && extras != "" {
if extras, err := netmon.InterfaceDebugExtras(iface.Index); err == nil && extras != "" {
fmt.Fprintf(w, "<td>%s</td> ", html.EscapeString(extras))
} else if err != nil {
fmt.Fprintf(w, "<td>%s</td> ", html.EscapeString(err.Error()))

@ -10,7 +10,7 @@ import (
"net"
"net/netip"
"tailscale.com/net/interfaces"
"tailscale.com/net/netmon"
"tailscale.com/net/netns"
)
@ -21,7 +21,7 @@ func init() {
// initListenConfigNetworkExtension configures nc for listening on IP
// through the iOS/macOS Network/System Extension (Packet Tunnel
// Provider) sandbox.
func initListenConfigNetworkExtension(nc *net.ListenConfig, ip netip.Addr, st *interfaces.State, tunIfName string) error {
func initListenConfigNetworkExtension(nc *net.ListenConfig, ip netip.Addr, st *netmon.State, tunIfName string) error {
tunIf, ok := st.Interface[tunIfName]
if !ok {
return fmt.Errorf("no interface with name %q", tunIfName)

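The two files above are part of the net/interfaces-to-netmon migration. A minimal sketch of the renamed helpers, assuming they keep the semantics of their former net/interfaces counterparts, as the mechanical rename suggests:

package main

import (
	"fmt"
	"net/netip"

	"tailscale.com/net/netmon"
)

func main() {
	// DefaultRoute replaces interfaces.DefaultRoute.
	if dr, err := netmon.DefaultRoute(); err == nil {
		fmt.Printf("default route: %s (index %d)\n", dr.InterfaceName, dr.InterfaceIndex)
	}
	// GetInterfaceList replaces interfaces.GetList.
	ifaces, err := netmon.GetInterfaceList()
	if err != nil {
		return
	}
	ifaces.ForeachInterface(func(iface netmon.Interface, addrs []netip.Prefix) {
		fmt.Println(iface.Name, addrs)
	})
}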
@ -687,185 +687,209 @@ func TestPeerAPIReplyToDNSQueries(t *testing.T) {
}
func TestPeerAPIPrettyReplyCNAME(t *testing.T) {
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
// configure as an app connector just to enable the API.
appConnector: appc.NewAppConnector(t.Logf, &appctest.RouteCollector{}),
},
}
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("www.example.com."),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName("example.com."),
},
)
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
for _, shouldStore := range []bool{false, true} {
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
var a *appc.AppConnector
if shouldStore {
a = appc.NewAppConnector(t.Logf, &appctest.RouteCollector{}, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
a = appc.NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
}
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
// configure as an app connector just to enable the API.
appConnector: a,
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly denied; wanted to be a DNS server")
}
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("www.example.com."),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName("example.com."),
},
)
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
var addrs []string
json.NewDecoder(w.Body).Decode(&addrs)
if len(addrs) == 0 {
t.Fatalf("no addresses returned")
}
for _, addr := range addrs {
netip.MustParseAddr(addr)
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly denied; wanted to be a DNS server")
}
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
var addrs []string
json.NewDecoder(w.Body).Decode(&addrs)
if len(addrs) == 0 {
t.Fatalf("no addresses returned")
}
for _, addr := range addrs {
netip.MustParseAddr(addr)
}
}
}
func TestPeerAPIReplyToDNSQueriesAreObserved(t *testing.T) {
ctx := context.Background()
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
appConnector: appc.NewAppConnector(t.Logf, rc),
},
}
h.ps.b.appConnector.UpdateDomains([]string{"example.com"})
h.ps.b.appConnector.Wait(ctx)
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
var a *appc.AppConnector
if shouldStore {
a = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
a = appc.NewAppConnector(t.Logf, rc, nil, nil)
}
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
appConnector: a,
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
}
h.ps.b.appConnector.UpdateDomains([]string{"example.com"})
h.ps.b.appConnector.Wait(ctx)
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
if !h.ps.b.OfferingAppConnector() {
t.Fatal("expecting to be offering app connector")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly denied; wanted to be a DNS server")
}
if !h.ps.b.OfferingAppConnector() {
t.Fatal("expecting to be offering app connector")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly denied; wanted to be a DNS server")
}
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
h.ps.b.appConnector.Wait(ctx)
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
h.ps.b.appConnector.Wait(ctx)
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
}
}
}
func TestPeerAPIReplyToDNSQueriesAreObservedWithCNAMEFlattening(t *testing.T) {
ctx := context.Background()
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
appConnector: appc.NewAppConnector(t.Logf, rc),
},
}
h.ps.b.appConnector.UpdateDomains([]string{"www.example.com"})
h.ps.b.appConnector.Wait(ctx)
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("www.example.com."),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName("example.com."),
},
)
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
var a *appc.AppConnector
if shouldStore {
a = appc.NewAppConnector(t.Logf, rc, &appc.RouteInfo{}, fakeStoreRoutes)
} else {
a = appc.NewAppConnector(t.Logf, rc, nil, nil)
}
h.ps = &peerAPIServer{
b: &LocalBackend{
e: eng,
pm: pm,
store: pm.Store(),
appConnector: a,
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
}
h.ps.b.appConnector.UpdateDomains([]string{"www.example.com"})
h.ps.b.appConnector.Wait(ctx)
h.ps.resolver = &fakeResolver{build: func(b *dnsmessage.Builder) {
b.CNAMEResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("www.example.com."),
Type: dnsmessage.TypeCNAME,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.CNAMEResource{
CNAME: dnsmessage.MustNewName("example.com."),
},
)
b.AResource(
dnsmessage.ResourceHeader{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
TTL: 0,
},
dnsmessage.AResource{
A: [4]byte{192, 0, 0, 8},
},
)
}}
f := filter.NewAllowAllForTest(logger.Discard)
h.ps.b.setFilter(f)
if !h.ps.b.OfferingAppConnector() {
t.Fatal("expecting to be offering app connector")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly denied; wanted to be a DNS server")
}
if !h.ps.b.OfferingAppConnector() {
t.Fatal("expecting to be offering app connector")
}
if !h.replyToDNSQueries() {
t.Errorf("unexpectedly denied; wanted to be a DNS server")
}
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
h.ps.b.appConnector.Wait(ctx)
w := httptest.NewRecorder()
h.handleDNSQuery(w, httptest.NewRequest("GET", "/dns-query?q=www.example.com.", nil))
if w.Code != http.StatusOK {
t.Errorf("unexpected status code: %v", w.Code)
}
h.ps.b.appConnector.Wait(ctx)
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("got %v; want %v", rc.Routes(), wantRoutes)
}
}
}

@ -28,6 +28,7 @@ import (
"tailscale.com/ipn/store/mem"
"tailscale.com/tailcfg"
"tailscale.com/tsd"
"tailscale.com/types/logger"
"tailscale.com/types/logid"
"tailscale.com/types/netmap"
"tailscale.com/util/mak"
@ -663,14 +664,21 @@ func mustCreateURL(t *testing.T, u string) url.URL {
}
func newTestBackend(t *testing.T) *LocalBackend {
var logf logger.Logf = logger.Discard
const debug = true
if debug {
logf = logger.WithPrefix(t.Logf, "... ")
}
sys := &tsd.System{}
e, err := wgengine.NewUserspaceEngine(t.Logf, wgengine.Config{SetSubsystem: sys.Set})
e, err := wgengine.NewUserspaceEngine(logf, wgengine.Config{SetSubsystem: sys.Set})
if err != nil {
t.Fatal(err)
}
sys.Set(e)
sys.Set(new(mem.Store))
b, err := NewLocalBackend(t.Logf, logid.PublicID{}, sys, 0)
b, err := NewLocalBackend(logf, logid.PublicID{}, sys, 0)
if err != nil {
t.Fatal(err)
}
@ -678,7 +686,7 @@ func newTestBackend(t *testing.T) *LocalBackend {
dir := t.TempDir()
b.SetVarRoot(dir)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf))
pm := must.Get(newProfileManager(new(mem.Store), logf))
pm.currentProfile = &ipn.LoginProfile{ID: "id0"}
b.pm = pm

@ -199,7 +199,7 @@ func (s *Server) serveHTTP(w http.ResponseWriter, r *http.Request) {
defer onDone()
if strings.HasPrefix(r.URL.Path, "/localapi/") {
lah := localapi.NewHandler(lb, s.logf, s.netMon, s.backendLogID)
lah := localapi.NewHandler(lb, s.logf, s.backendLogID)
lah.PermitRead, lah.PermitWrite = s.localAPIPermissions(ci)
lah.PermitCert = s.connCanFetchCerts(ci)
lah.ConnIdentity = ci

@ -140,7 +140,7 @@ func (h *Handler) serveDebugDERPRegion(w http.ResponseWriter, r *http.Request) {
}
checkSTUN4 := func(derpNode *tailcfg.DERPNode) {
u4, err := nettype.MakePacketListenerWithNetIP(netns.Listener(h.logf, h.netMon)).ListenPacket(ctx, "udp4", ":0")
u4, err := nettype.MakePacketListenerWithNetIP(netns.Listener(h.logf, h.b.NetMon())).ListenPacket(ctx, "udp4", ":0")
if err != nil {
st.Errors = append(st.Errors, fmt.Sprintf("Error creating IPv4 STUN listener: %v", err))
return
@ -249,7 +249,7 @@ func (h *Handler) serveDebugDERPRegion(w http.ResponseWriter, r *http.Request) {
serverPubKeys := make(map[key.NodePublic]bool)
for i := range 5 {
func() {
rc := derphttp.NewRegionClient(fakePrivKey, h.logf, h.netMon, func() *tailcfg.DERPRegion {
rc := derphttp.NewRegionClient(fakePrivKey, h.logf, h.b.NetMon(), func() *tailcfg.DERPRegion {
return &tailcfg.DERPRegion{
RegionID: reg.RegionID,
RegionCode: reg.RegionCode,

@ -36,7 +36,6 @@ import (
"tailscale.com/clientupdate"
"tailscale.com/drive"
"tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/hostinfo"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnauth"
@ -156,8 +155,8 @@ var (
// NewHandler creates a new LocalAPI HTTP handler. All parameters are required.
func NewHandler(b *ipnlocal.LocalBackend, logf logger.Logf, netMon *netmon.Monitor, logID logid.PublicID) *Handler {
return &Handler{b: b, logf: logf, netMon: netMon, backendLogID: logID, clock: tstime.StdClock{}}
func NewHandler(b *ipnlocal.LocalBackend, logf logger.Logf, logID logid.PublicID) *Handler {
return &Handler{b: b, logf: logf, backendLogID: logID, clock: tstime.StdClock{}}
}
type Handler struct {
@ -188,7 +187,6 @@ type Handler struct {
b *ipnlocal.LocalBackend
logf logger.Logf
netMon *netmon.Monitor // optional; nil means interfaces will be looked up on-demand
backendLogID logid.PublicID
clock tstime.Clock
}
@ -358,7 +356,7 @@ func (h *Handler) serveBugReport(w http.ResponseWriter, r *http.Request) {
}
hi, _ := json.Marshal(hostinfo.New())
h.logf("user bugreport hostinfo: %s", hi)
if err := health.OverallError(); err != nil {
if err := h.b.HealthTracker().OverallError(); err != nil {
h.logf("user bugreport health: %s", err.Error())
} else {
h.logf("user bugreport health: ok")
@ -748,7 +746,7 @@ func (h *Handler) serveDebugPortmap(w http.ResponseWriter, r *http.Request) {
done := make(chan bool, 1)
var c *portmapper.Client
c = portmapper.NewClient(logger.WithPrefix(logf, "portmapper: "), h.netMon, debugKnobs, h.b.ControlKnobs(), func() {
c = portmapper.NewClient(logger.WithPrefix(logf, "portmapper: "), h.b.NetMon(), debugKnobs, h.b.ControlKnobs(), func() {
logf("portmapping changed.")
logf("have mapping: %v", c.HaveMapping())
@ -1368,6 +1366,12 @@ func (h *Handler) servePrefs(w http.ResponseWriter, r *http.Request) {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
if err := h.b.MaybeClearAppConnector(mp); err != nil {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusInternalServerError)
json.NewEncoder(w).Encode(resJSON{Error: err.Error()})
return
}
var err error
prefs, err = h.b.EditPrefs(mp)
if err != nil {

@ -7,6 +7,7 @@ package kubestore
import (
"context"
"fmt"
"net"
"strings"
"time"
@ -18,7 +19,7 @@ import (
// Store is an ipn.StateStore that uses a Kubernetes Secret for persistence.
type Store struct {
client *kube.Client
client kube.Client
canPatch bool
secretName string
}
@ -29,7 +30,7 @@ func New(_ logger.Logf, secretName string) (*Store, error) {
if err != nil {
return nil, err
}
canPatch, err := c.CheckSecretPermissions(context.Background(), secretName)
canPatch, _, err := c.CheckSecretPermissions(context.Background(), secretName)
if err != nil {
return nil, err
}
@ -83,7 +84,7 @@ func (s *Store) WriteState(id ipn.StateKey, bs []byte) error {
secret, err := s.client.GetSecret(ctx, s.secretName)
if err != nil {
if st, ok := err.(*kube.Status); ok && st.Code == 404 {
if kube.IsNotFoundErr(err) {
return s.client.CreateSecret(ctx, &kube.Secret{
TypeMeta: kube.TypeMeta{
APIVersion: "v1",
@ -100,6 +101,19 @@ func (s *Store) WriteState(id ipn.StateKey, bs []byte) error {
return err
}
if s.canPatch {
if len(secret.Data) == 0 { // if user has pre-created a blank Secret
m := []kube.JSONPatch{
{
Op: "add",
Path: "/data",
Value: map[string][]byte{sanitizeKey(id): bs},
},
}
if err := s.client.JSONPatchSecret(ctx, s.secretName, m); err != nil {
return fmt.Errorf("error patching Secret %s with a /data field: %v", s.secretName, err)
}
return nil
}
m := []kube.JSONPatch{
{
Op: "add",
@ -108,7 +122,7 @@ func (s *Store) WriteState(id ipn.StateKey, bs []byte) error {
},
}
if err := s.client.JSONPatchSecret(ctx, s.secretName, m); err != nil {
return err
return fmt.Errorf("error patching Secret %s with /data/%s field: %v", s.secretName, sanitizeKey(id), err)
}
return nil
}
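In summary, WriteState now emits one of two patch shapes. The helper name below is hypothetical, but the kube.JSONPatch fields match the hunk above:

package kubestore

import "tailscale.com/kube"

// patchesFor sketches WriteState's patch construction: a pre-created blank
// Secret needs the whole /data map added first; otherwise a single key is
// added under /data/<key>.
func patchesFor(key string, bs []byte, blankSecret bool) []kube.JSONPatch {
	if blankSecret {
		return []kube.JSONPatch{{
			Op:    "add",
			Path:  "/data",
			Value: map[string][]byte{key: bs},
		}}
	}
	return []kube.JSONPatch{{
		Op:    "add",
		Path:  "/data/" + key,
		Value: bs,
	}}
}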

File diff suppressed because it is too large.

@ -52,7 +52,14 @@ type ProxyClassSpec struct {
// Configuration parameters for the proxy's StatefulSet. Tailscale
// Kubernetes operator deploys a StatefulSet for each of the user
// configured proxies (Tailscale Ingress, Tailscale Service, Connector).
// +optional
StatefulSet *StatefulSet `json:"statefulSet"`
// Configuration for proxy metrics. Metrics are currently not supported
// for egress proxies and for Ingress proxies that have been configured
// with tailscale.com/experimental-forward-cluster-traffic-via-ingress
// annotation.
// +optional
Metrics *Metrics `json:"metrics,omitempty"`
}
type StatefulSet struct {
@ -94,6 +101,11 @@ type Pod struct {
// https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set
// +optional
Annotations map[string]string `json:"annotations,omitempty"`
// Proxy Pod's affinity rules.
// By default, the Tailscale Kubernetes operator does not apply any affinity rules.
// https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#affinity
// +optional
Affinity *corev1.Affinity `json:"affinity,omitempty"`
// Configuration for the proxy container running tailscale.
// +optional
TailscaleContainer *Container `json:"tailscaleContainer,omitempty"`
@ -126,6 +138,14 @@ type Pod struct {
// https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling
// +optional
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
}
type Metrics struct {
// Setting enable to true will make the proxy serve Tailscale metrics
// at <pod-ip>:9001/debug/metrics.
// Defaults to false.
Enable bool `json:"enable"`
}
type Container struct {

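For illustration, a ProxyClass spec opting into the new metrics endpoint; the import path and package alias are assumptions, but the field names match the types above:

package main

import tsapi "tailscale.com/k8s-operator/apis/v1alpha1"

// Metrics will be served at <pod-ip>:9001/debug/metrics.
var proxyClassSpec = tsapi.ProxyClassSpec{
	Metrics: &tsapi.Metrics{Enable: true},
}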
@ -178,6 +178,21 @@ func (in *Env) DeepCopy() *Env {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Metrics) DeepCopyInto(out *Metrics) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Metrics.
func (in *Metrics) DeepCopy() *Metrics {
if in == nil {
return nil
}
out := new(Metrics)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Pod) DeepCopyInto(out *Pod) {
*out = *in
@ -195,6 +210,11 @@ func (in *Pod) DeepCopyInto(out *Pod) {
(*out)[key] = val
}
}
if in.Affinity != nil {
in, out := &in.Affinity, &out.Affinity
*out = new(v1.Affinity)
(*in).DeepCopyInto(*out)
}
if in.TailscaleContainer != nil {
in, out := &in.TailscaleContainer, &out.TailscaleContainer
*out = new(Container)
@ -308,6 +328,11 @@ func (in *ProxyClassSpec) DeepCopyInto(out *ProxyClassSpec) {
*out = new(StatefulSet)
(*in).DeepCopyInto(*out)
}
if in.Metrics != nil {
in, out := &in.Metrics, &out.Metrics
*out = new(Metrics)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProxyClassSpec.

@ -49,7 +49,18 @@ func readFile(n string) ([]byte, error) {
// Client handles connections to Kubernetes.
// It expects to be run inside a cluster.
type Client struct {
type Client interface {
GetSecret(context.Context, string) (*Secret, error)
UpdateSecret(context.Context, *Secret) error
CreateSecret(context.Context, *Secret) error
StrategicMergePatchSecret(context.Context, string, *Secret, string) error
JSONPatchSecret(context.Context, string, []JSONPatch) error
CheckSecretPermissions(context.Context, string) (bool, bool, error)
SetDialer(dialer func(context.Context, string, string) (net.Conn, error))
SetURL(string)
}
type client struct {
mu sync.Mutex
url string
ns string
@ -59,7 +70,7 @@ type Client struct {
}
// New returns a new client
func New() (*Client, error) {
func New() (Client, error) {
ns, err := readFile("namespace")
if err != nil {
return nil, err
@ -72,7 +83,7 @@ func New() (*Client, error) {
if ok := cp.AppendCertsFromPEM(caCert); !ok {
return nil, fmt.Errorf("kube: error in creating root cert pool")
}
return &Client{
return &client{
url: defaultURL,
ns: string(ns),
client: &http.Client{
@ -87,23 +98,23 @@ func New() (*Client, error) {
// SetURL sets the URL to use for the Kubernetes API.
// This is used only for testing.
func (c *Client) SetURL(url string) {
func (c *client) SetURL(url string) {
c.url = url
}
// SetDialer sets the dialer to use when establishing a connection
// to the Kubernetes API server.
func (c *Client) SetDialer(dialer func(ctx context.Context, network, addr string) (net.Conn, error)) {
func (c *client) SetDialer(dialer func(ctx context.Context, network, addr string) (net.Conn, error)) {
c.client.Transport.(*http.Transport).DialContext = dialer
}
func (c *Client) expireToken() {
func (c *client) expireToken() {
c.mu.Lock()
defer c.mu.Unlock()
c.tokenExpiry = time.Now()
}
func (c *Client) getOrRenewToken() (string, error) {
func (c *client) getOrRenewToken() (string, error) {
c.mu.Lock()
defer c.mu.Unlock()
tk, te := c.token, c.tokenExpiry
@ -120,7 +131,7 @@ func (c *Client) getOrRenewToken() (string, error) {
return c.token, nil
}
func (c *Client) secretURL(name string) string {
func (c *client) secretURL(name string) string {
if name == "" {
return fmt.Sprintf("%s/api/v1/namespaces/%s/secrets", c.url, c.ns)
}
@ -153,7 +164,7 @@ func setHeader(key, value string) func(*http.Request) {
// decoded from JSON.
// If the request fails with a 401, the token is expired and a new one is
// requested.
func (c *Client) doRequest(ctx context.Context, method, url string, in, out any, opts ...func(*http.Request)) error {
func (c *client) doRequest(ctx context.Context, method, url string, in, out any, opts ...func(*http.Request)) error {
req, err := c.newRequest(ctx, method, url, in)
if err != nil {
return err
@ -178,7 +189,7 @@ func (c *Client) doRequest(ctx context.Context, method, url string, in, out any,
return nil
}
func (c *Client) newRequest(ctx context.Context, method, url string, in any) (*http.Request, error) {
func (c *client) newRequest(ctx context.Context, method, url string, in any) (*http.Request, error) {
tk, err := c.getOrRenewToken()
if err != nil {
return nil, err
@ -209,7 +220,7 @@ func (c *Client) newRequest(ctx context.Context, method, url string, in any) (*h
}
// GetSecret fetches the secret from the Kubernetes API.
func (c *Client) GetSecret(ctx context.Context, name string) (*Secret, error) {
func (c *client) GetSecret(ctx context.Context, name string) (*Secret, error) {
s := &Secret{Data: make(map[string][]byte)}
if err := c.doRequest(ctx, "GET", c.secretURL(name), nil, s); err != nil {
return nil, err
@ -218,18 +229,18 @@ func (c *Client) GetSecret(ctx context.Context, name string) (*Secret, error) {
}
// CreateSecret creates a secret in the Kubernetes API.
func (c *Client) CreateSecret(ctx context.Context, s *Secret) error {
func (c *client) CreateSecret(ctx context.Context, s *Secret) error {
s.Namespace = c.ns
return c.doRequest(ctx, "POST", c.secretURL(""), s, nil)
}
// UpdateSecret updates a secret in the Kubernetes API.
func (c *Client) UpdateSecret(ctx context.Context, s *Secret) error {
func (c *client) UpdateSecret(ctx context.Context, s *Secret) error {
return c.doRequest(ctx, "PUT", c.secretURL(s.Name), s, nil)
}
// JSONPatch is a JSON patch operation.
// It currently (2023-03-02) only supports the "remove" operation.
// It currently (2023-03-02) only supports "add" and "remove" operations.
//
// https://tools.ietf.org/html/rfc6902
type JSONPatch struct {
@ -239,8 +250,8 @@ type JSONPatch struct {
}
// JSONPatchSecret updates a secret in the Kubernetes API using a JSON patch.
// It currently (2023-03-02) only supports the "remove" operation.
func (c *Client) JSONPatchSecret(ctx context.Context, name string, patch []JSONPatch) error {
// It currently (2023-03-02) only supports "add" and "remove" operations.
func (c *client) JSONPatchSecret(ctx context.Context, name string, patch []JSONPatch) error {
for _, p := range patch {
if p.Op != "remove" && p.Op != "add" {
panic(fmt.Errorf("unsupported JSON patch operation: %q", p.Op))
@ -252,7 +263,7 @@ func (c *Client) JSONPatchSecret(ctx context.Context, name string, patch []JSONP
// StrategicMergePatchSecret updates a secret in the Kubernetes API using a
// strategic merge patch.
// If a fieldManager is provided, it will be used to track the patch.
func (c *Client) StrategicMergePatchSecret(ctx context.Context, name string, s *Secret, fieldManager string) error {
func (c *client) StrategicMergePatchSecret(ctx context.Context, name string, s *Secret, fieldManager string) error {
surl := c.secretURL(name)
if fieldManager != "" {
uv := url.Values{
@ -267,7 +278,7 @@ func (c *Client) StrategicMergePatchSecret(ctx context.Context, name string, s *
// CheckSecretPermissions checks the secret access permissions of the current
// pod. It returns an error if the basic permissions tailscale needs are
// missing, and reports whether the patch permission is additionally present.
// missing, and reports whether the patch and create permissions are additionally present.
//
// Errors encountered during the access checking process are logged, but ignored
// so that the pod tries to fail alive if the permissions exist and there's just
@ -275,7 +286,7 @@ func (c *Client) StrategicMergePatchSecret(ctx context.Context, name string, s *
// should always be able to use SSARs to assess their own permissions, but since
// we didn't use to check permissions this way we'll be cautious in case some
// old version of k8s deviates from the current behavior.
func (c *Client) CheckSecretPermissions(ctx context.Context, secretName string) (canPatch bool, err error) {
func (c *client) CheckSecretPermissions(ctx context.Context, secretName string) (canPatch, canCreate bool, err error) {
var errs []error
for _, verb := range []string{"get", "update"} {
ok, err := c.checkPermission(ctx, verb, secretName)
@ -286,19 +297,24 @@ func (c *Client) CheckSecretPermissions(ctx context.Context, secretName string)
}
}
if len(errs) > 0 {
return false, multierr.New(errs...)
return false, false, multierr.New(errs...)
}
ok, err := c.checkPermission(ctx, "patch", secretName)
canPatch, err = c.checkPermission(ctx, "patch", secretName)
if err != nil {
log.Printf("error checking patch permission on secret %s: %v", secretName, err)
return false, nil
return false, false, nil
}
canCreate, err = c.checkPermission(ctx, "create", secretName)
if err != nil {
log.Printf("error checking create permission on secret %s: %v", secretName, err)
return false, false, nil
}
return ok, nil
return canPatch, canCreate, nil
}
// checkPermission reports whether the current pod has permission to use the
// given verb (e.g. get, update, patch) on secretName.
func (c *Client) checkPermission(ctx context.Context, verb, secretName string) (bool, error) {
// given verb (e.g. get, update, patch, create) on secretName.
func (c *client) checkPermission(ctx context.Context, verb, secretName string) (bool, error) {
sar := map[string]any{
"apiVersion": "authorization.k8s.io/v1",
"kind": "SelfSubjectAccessReview",
@ -322,3 +338,10 @@ func (c *Client) checkPermission(ctx context.Context, verb, secretName string) (
}
return res.Status.Allowed, nil
}
func IsNotFoundErr(err error) bool {
if st, ok := err.(*Status); ok && st.Code == 404 {
return true
}
return false
}
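A caller-side sketch of the widened CheckSecretPermissions return; the variable names are illustrative, the signature is the one on the Client interface above:

package kubestore

import (
	"context"

	"tailscale.com/kube"
)

func checkPerms(ctx context.Context, c kube.Client, secretName string) error {
	canPatch, canCreate, err := c.CheckSecretPermissions(ctx, secretName)
	if err != nil {
		return err // the basic get/update permissions are missing
	}
	if !canPatch {
		// Fall back to whole-Secret updates instead of JSON patches.
	}
	if !canCreate {
		// The state Secret must be pre-created; we cannot create it.
	}
	return nil
}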

@ -0,0 +1,37 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// Package kube provides a client to interact with Kubernetes.
// This package is Tailscale-internal and not meant for external consumption.
// Further, the API should not be considered stable.
package kube
import (
"context"
"net"
)
var _ Client = &FakeClient{}
type FakeClient struct {
GetSecretImpl func(context.Context, string) (*Secret, error)
CheckSecretPermissionsImpl func(ctx context.Context, name string) (bool, bool, error)
}
func (fc *FakeClient) CheckSecretPermissions(ctx context.Context, name string) (bool, bool, error) {
return fc.CheckSecretPermissionsImpl(ctx, name)
}
func (fc *FakeClient) GetSecret(ctx context.Context, name string) (*Secret, error) {
return fc.GetSecretImpl(ctx, name)
}
func (fc *FakeClient) SetURL(_ string) {}
func (fc *FakeClient) SetDialer(dialer func(ctx context.Context, network, addr string) (net.Conn, error)) {
}
func (fc *FakeClient) StrategicMergePatchSecret(context.Context, string, *Secret, string) error {
return nil
}
func (fc *FakeClient) JSONPatchSecret(context.Context, string, []JSONPatch) error {
return nil
}
func (fc *FakeClient) UpdateSecret(context.Context, *Secret) error { return nil }
func (fc *FakeClient) CreateSecret(context.Context, *Secret) error { return nil }
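A sketch of stubbing the interface in a test with the new FakeClient; the assertions are illustrative, and kube.Status is assumed to implement error (WriteState's type assertion above relies on that):

package kubestore

import (
	"context"
	"testing"

	"tailscale.com/kube"
)

func TestWithFakeClient(t *testing.T) {
	fc := &kube.FakeClient{
		CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
			return true, true, nil // canPatch, canCreate
		},
		GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
			return nil, &kube.Status{Code: 404} // no state Secret yet
		},
	}
	if _, _, err := fc.CheckSecretPermissions(context.Background(), "ts-state"); err != nil {
		t.Fatal(err)
	}
}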

@ -17,6 +17,7 @@ import (
"sync/atomic"
"time"
"tailscale.com/health"
"tailscale.com/logpolicy"
"tailscale.com/logtail"
"tailscale.com/logtail/filch"
@ -92,11 +93,16 @@ func SockstatLogID(logID logid.PublicID) logid.PrivateID {
// On platforms that do not support sockstat logging, a nil Logger will be returned.
// The returned Logger is not yet enabled, and must be shut down with Shutdown when it is no longer needed.
// Logs will be uploaded to the log server using a new log ID derived from the provided backend logID.
// The netMon parameter is optional; if non-nil it's used to do faster interface lookups.
func NewLogger(logdir string, logf logger.Logf, logID logid.PublicID, netMon *netmon.Monitor) (*Logger, error) {
//
// The netMon parameter is optional. It should be specified in environments where
// tailscaled is manipulating the routing table.
func NewLogger(logdir string, logf logger.Logf, logID logid.PublicID, netMon *netmon.Monitor, health *health.Tracker) (*Logger, error) {
if !sockstats.IsAvailable {
return nil, nil
}
if netMon == nil {
netMon = netmon.NewStatic()
}
if err := os.MkdirAll(logdir, 0755); err != nil && !os.IsExist(err) {
return nil, err
@ -113,7 +119,7 @@ func NewLogger(logdir string, logf logger.Logf, logID logid.PublicID, netMon *ne
logger := &Logger{
logf: logf,
filch: filch,
tr: logpolicy.NewLogtailTransport(logtail.DefaultHost, netMon, logf),
tr: logpolicy.NewLogtailTransport(logtail.DefaultHost, netMon, health, logf),
}
logger.logger = logtail.NewLogger(logtail.Config{
BaseURL: logpolicy.LogURL(),

@ -23,7 +23,7 @@ func TestResourceCleanup(t *testing.T) {
if err != nil {
t.Fatal(err)
}
lg, err := NewLogger(td, logger.Discard, id.Public(), nil)
lg, err := NewLogger(td, logger.Discard, id.Public(), nil, nil)
if err != nil {
t.Fatal(err)
}

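As the updated test shows, nil is now valid for both trailing parameters. A caller sketch:

package main

import (
	"tailscale.com/log/sockstatlog"
	"tailscale.com/types/logger"
	"tailscale.com/types/logid"
)

// newSockstatLogger passes nil for both netMon (NewLogger substitutes a
// static monitor, per the hunk above) and the health tracker.
func newSockstatLogger(logdir string, logID logid.PublicID) (*sockstatlog.Logger, error) {
	return sockstatlog.NewLogger(logdir, logger.Discard, logID, nil, nil)
}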
@ -30,6 +30,7 @@ import (
"golang.org/x/term"
"tailscale.com/atomicfile"
"tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/log/filelogger"
"tailscale.com/logtail"
"tailscale.com/logtail/filch"
@ -446,19 +447,22 @@ func tryFixLogStateLocation(dir, cmdname string, logf logger.Logf) {
// New returns a new log policy (a logger and its instance ID) for a given
// collection name.
//
// The netMon parameter is optional; if non-nil it's used to do faster
// interface lookups.
// The netMon parameter is optional. It should be specified in environments where
// tailscaled is manipulating the routing table.
//
// The logf parameter is optional; if non-nil, information logs (e.g. when
// migrating state) are sent to that logger, and global changes to the log
// package are avoided. If nil, logs will be printed using log.Printf.
func New(collection string, netMon *netmon.Monitor, logf logger.Logf) *Policy {
return NewWithConfigPath(collection, "", "", netMon, logf)
func New(collection string, netMon *netmon.Monitor, health *health.Tracker, logf logger.Logf) *Policy {
return NewWithConfigPath(collection, "", "", netMon, health, logf)
}
// NewWithConfigPath is identical to New, but uses the specified directory and
// command name. If either is empty, it derives them automatically.
func NewWithConfigPath(collection, dir, cmdName string, netMon *netmon.Monitor, logf logger.Logf) *Policy {
//
// The netMon parameter is optional. It should be specified in environments where
// tailscaled is manipulating the routing table.
func NewWithConfigPath(collection, dir, cmdName string, netMon *netmon.Monitor, health *health.Tracker, logf logger.Logf) *Policy {
var lflags int
if term.IsTerminal(2) || runtime.GOOS == "windows" {
lflags = 0
@ -554,7 +558,7 @@ func NewWithConfigPath(collection, dir, cmdName string, netMon *netmon.Monitor,
PrivateID: newc.PrivateID,
Stderr: logWriter{console},
CompressLogs: true,
HTTPC: &http.Client{Transport: NewLogtailTransport(logtail.DefaultHost, netMon, logf)},
HTTPC: &http.Client{Transport: NewLogtailTransport(logtail.DefaultHost, netMon, health, logf)},
}
if collection == logtail.CollectionNode {
conf.MetricsDelta = clientmetric.EncodeLogTailMetricsDelta
@ -569,7 +573,7 @@ func NewWithConfigPath(collection, dir, cmdName string, netMon *netmon.Monitor,
logf("You have enabled a non-default log target. Doing without being told to by Tailscale staff or your network administrator will make getting support difficult.")
conf.BaseURL = val
u, _ := url.Parse(val)
conf.HTTPC = &http.Client{Transport: NewLogtailTransport(u.Host, netMon, logf)}
conf.HTTPC = &http.Client{Transport: NewLogtailTransport(u.Host, netMon, health, logf)}
}
filchOptions := filch.Options{
@ -680,8 +684,12 @@ func (p *Policy) Shutdown(ctx context.Context) error {
// - If TLS connection fails, try again using LetsEncrypt's built-in root certificate,
// for the benefit of older OS platforms which might not include it.
//
// The netMon parameter is optional; if non-nil it's used to do faster interface lookups.
// The netMon parameter is optional. It should be specified in environments where
// tailscaled is manipulating the routing table.
func MakeDialFunc(netMon *netmon.Monitor, logf logger.Logf) func(ctx context.Context, netw, addr string) (net.Conn, error) {
if netMon == nil {
netMon = netmon.NewStatic()
}
return func(ctx context.Context, netw, addr string) (net.Conn, error) {
return dialContext(ctx, netw, addr, netMon, logf)
}
@ -724,7 +732,6 @@ func dialContext(ctx context.Context, netw, addr string, netMon *netmon.Monitor,
Forward: dnscache.Get().Forward, // use default cache's forwarder
UseLastGood: true,
LookupIPFallback: dnsfallback.MakeLookupFunc(logf, netMon),
NetMon: netMon,
}
dialer := dnscache.Dialer(nd.DialContext, dnsCache)
c, err = dialer(ctx, netw, addr)
@ -737,14 +744,18 @@ func dialContext(ctx context.Context, netw, addr string, netMon *netmon.Monitor,
// NewLogtailTransport returns an HTTP Transport particularly suited to uploading
// logs to the given host name. See DialContext for details on how it works.
//
// The netMon parameter is optional; if non-nil it's used to do faster interface lookups.
// The netMon parameter is optional. It should be specified in environments where
// tailscaled is manipulating the routing table.
//
// The logf parameter is optional; if non-nil, logs are printed using the
// provided function; if nil, log.Printf will be used instead.
func NewLogtailTransport(host string, netMon *netmon.Monitor, logf logger.Logf) http.RoundTripper {
func NewLogtailTransport(host string, netMon *netmon.Monitor, health *health.Tracker, logf logger.Logf) http.RoundTripper {
if testenv.InTest() {
return noopPretendSuccessTransport{}
}
if netMon == nil {
netMon = netmon.NewStatic()
}
// Start with a copy of http.DefaultTransport and tweak it a bit.
tr := http.DefaultTransport.(*http.Transport).Clone()
@ -782,7 +793,7 @@ func NewLogtailTransport(host string, netMon *netmon.Monitor, logf logger.Logf)
tr.TLSNextProto = map[string]func(authority string, c *tls.Conn) http.RoundTripper{}
}
tr.TLSClientConfig = tlsdial.Config(host, tr.TLSClientConfig)
tr.TLSClientConfig = tlsdial.Config(host, health, tr.TLSClientConfig)
return tr
}
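Call sites of logpolicy.New gain one argument. A sketch, assuming a nil health tracker is tolerated the same way a nil netMon now is:

package main

import (
	"log"

	"tailscale.com/logpolicy"
	"tailscale.com/logtail"
)

func main() {
	// nil netMon: a static monitor is substituted; nil health: assumed OK.
	pol := logpolicy.New(logtail.CollectionNode, nil, nil, log.Printf)
	_ = pol
}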

@ -232,7 +232,7 @@ func (l *Logger) SetVerbosityLevel(level int) {
atomic.StoreInt64(&l.stderrLevel, int64(level))
}
// SetNetMon sets the optional the network monitor.
// SetNetMon sets the network monitor.
//
// It should not be changed concurrently with log writes and should
// only be set once.

@ -21,6 +21,7 @@ import (
"sync"
"time"
"tailscale.com/health"
"tailscale.com/net/dns/resolvconffile"
"tailscale.com/net/tsaddr"
"tailscale.com/types/logger"
@ -116,8 +117,9 @@ func restartResolved() error {
// The caller must call Down before program shutdown
// or as cleanup if the program terminates unexpectedly.
type directManager struct {
logf logger.Logf
fs wholeFileFS
logf logger.Logf
health *health.Tracker
fs wholeFileFS
// renameBroken is set if fs.Rename to or from /etc/resolv.conf
// fails. This can happen in some container runtimes, where
// /etc/resolv.conf is bind-mounted from outside the container,
@ -140,14 +142,15 @@ type directManager struct {
}
//lint:ignore U1000 used in manager_{freebsd,openbsd}.go
func newDirectManager(logf logger.Logf) *directManager {
return newDirectManagerOnFS(logf, directFS{})
func newDirectManager(logf logger.Logf, health *health.Tracker) *directManager {
return newDirectManagerOnFS(logf, health, directFS{})
}
func newDirectManagerOnFS(logf logger.Logf, fs wholeFileFS) *directManager {
func newDirectManagerOnFS(logf logger.Logf, health *health.Tracker, fs wholeFileFS) *directManager {
ctx, cancel := context.WithCancel(context.Background())
m := &directManager{
logf: logf,
health: health,
fs: fs,
ctx: ctx,
ctxClose: cancel,

@ -78,7 +78,7 @@ func (m *directManager) checkForFileTrample() {
return
}
if bytes.Equal(cur, want) {
warnTrample.Set(nil)
m.health.SetWarnable(warnTrample, nil)
if lastWarn != nil {
m.mu.Lock()
m.lastWarnContents = nil
@ -101,7 +101,7 @@ func (m *directManager) checkForFileTrample() {
show = show[:1024]
}
m.logf("trample: resolv.conf changed from what we expected. did some other program interfere? current contents: %q", show)
warnTrample.Set(errors.New("Linux DNS config not ideal. /etc/resolv.conf overwritten. See https://tailscale.com/s/dns-fight"))
m.health.SetWarnable(warnTrample, errors.New("Linux DNS config not ideal. /etc/resolv.conf overwritten. See https://tailscale.com/s/dns-fight"))
}
func (m *directManager) closeInotifyOnDone(ctx context.Context, in *gonotify.Inotify) {

@ -42,7 +42,8 @@ const maxActiveQueries = 256
// Manager manages system DNS settings.
type Manager struct {
logf logger.Logf
logf logger.Logf
health *health.Tracker
activeQueriesAtomic int32
@ -54,16 +55,19 @@ type Manager struct {
}
// NewManager creates a new manager from the given config.
// The health tracker may be nil; the dialer must be non-nil and have a NetMon.
func NewManager(logf logger.Logf, oscfg OSConfigurator, netMon *netmon.Monitor, dialer *tsdial.Dialer, linkSel resolver.ForwardLinkSelector, knobs *controlknobs.Knobs) *Manager {
func NewManager(logf logger.Logf, oscfg OSConfigurator, health *health.Tracker, dialer *tsdial.Dialer, linkSel resolver.ForwardLinkSelector, knobs *controlknobs.Knobs) *Manager {
if dialer == nil {
panic("nil Dialer")
}
if dialer.NetMon() == nil {
panic("Dialer has nil NetMon")
}
logf = logger.WithPrefix(logf, "dns: ")
m := &Manager{
logf: logf,
resolver: resolver.New(logf, netMon, linkSel, dialer, knobs),
resolver: resolver.New(logf, linkSel, dialer, knobs),
os: oscfg,
health: health,
}
m.ctx, m.ctxCancel = context.WithCancel(context.Background())
m.logf("using %T", m.os)
@ -94,10 +98,10 @@ func (m *Manager) Set(cfg Config) error {
return err
}
if err := m.os.SetDNS(ocfg); err != nil {
health.SetDNSOSHealth(err)
m.health.SetDNSOSHealth(err)
return err
}
health.SetDNSOSHealth(nil)
m.health.SetDNSOSHealth(nil)
return nil
}
@ -248,7 +252,7 @@ func (m *Manager) compileConfig(cfg Config) (rcfg resolver.Config, ocfg OSConfig
// This is currently (2022-10-13) expected on certain iOS and macOS
// builds.
} else {
health.SetDNSOSHealth(err)
m.health.SetDNSOSHealth(err)
return resolver.Config{}, OSConfig{}, err
}
}
@ -452,13 +456,15 @@ func (m *Manager) FlushCaches() error {
// CleanUp restores the system DNS configuration to its original state
// in case the Tailscale daemon terminated without closing the router.
// No other state needs to be instantiated before this runs.
func CleanUp(logf logger.Logf, interfaceName string) {
oscfg, err := NewOSConfigurator(logf, interfaceName)
func CleanUp(logf logger.Logf, netMon *netmon.Monitor, interfaceName string) {
oscfg, err := NewOSConfigurator(logf, nil, interfaceName)
if err != nil {
logf("creating dns cleanup: %v", err)
return
}
dns := NewManager(logf, oscfg, nil, &tsdial.Dialer{Logf: logf}, nil, nil)
d := &tsdial.Dialer{Logf: logf}
d.SetNetMon(netMon)
dns := NewManager(logf, oscfg, nil, d, nil, nil)
if err := dns.Down(); err != nil {
logf("dns down: %v", err)
}

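The new wiring in one place: the health tracker replaces the monitor in NewManager's signature, and the monitor rides in on the dialer (NewManager panics if the dialer or its NetMon is nil, per the checks above). A sketch:

package main

import (
	"tailscale.com/health"
	"tailscale.com/net/dns"
	"tailscale.com/net/netmon"
	"tailscale.com/net/tsdial"
	"tailscale.com/types/logger"
)

func newDNSManager(logf logger.Logf, oscfg dns.OSConfigurator, ht *health.Tracker) *dns.Manager {
	// A static monitor suffices where tailscaled is not managing routes.
	d := tsdial.NewDialer(netmon.NewStatic())
	return dns.NewManager(logf, oscfg, ht, d, nil /* link selector */, nil /* control knobs */)
}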
@ -8,11 +8,12 @@ import (
"os"
"go4.org/mem"
"tailscale.com/health"
"tailscale.com/types/logger"
"tailscale.com/util/mak"
)
func NewOSConfigurator(logf logger.Logf, ifName string) (OSConfigurator, error) {
func NewOSConfigurator(logf logger.Logf, health *health.Tracker, ifName string) (OSConfigurator, error) {
return &darwinConfigurator{logf: logf, ifName: ifName}, nil
}

@ -5,11 +5,11 @@
package dns
import "tailscale.com/types/logger"
import (
"tailscale.com/health"
"tailscale.com/types/logger"
)
func NewOSConfigurator(logger.Logf, string) (OSConfigurator, error) {
// TODO(dmytro): on darwin, we should use a macOS-specific method such as scutil.
// This is currently not implemented. Editing /etc/resolv.conf does not work,
// as most applications use the system resolver, which disregards it.
func NewOSConfigurator(logger.Logf, *health.Tracker, string) (OSConfigurator, error) {
return NewNoopManager()
}

@ -7,13 +7,14 @@ import (
"fmt"
"os"
"tailscale.com/health"
"tailscale.com/types/logger"
)
func NewOSConfigurator(logf logger.Logf, _ string) (OSConfigurator, error) {
func NewOSConfigurator(logf logger.Logf, health *health.Tracker, _ string) (OSConfigurator, error) {
bs, err := os.ReadFile("/etc/resolv.conf")
if os.IsNotExist(err) {
return newDirectManager(logf), nil
return newDirectManager(logf, health), nil
}
if err != nil {
return nil, fmt.Errorf("reading /etc/resolv.conf: %w", err)
@ -23,16 +24,16 @@ func NewOSConfigurator(logf logger.Logf, _ string) (OSConfigurator, error) {
case "resolvconf":
switch resolvconfStyle() {
case "":
return newDirectManager(logf), nil
return newDirectManager(logf, health), nil
case "debian":
return newDebianResolvconfManager(logf)
case "openresolv":
return newOpenresolvManager(logf)
default:
logf("[unexpected] got unknown flavor of resolvconf %q, falling back to direct manager", resolvconfStyle())
return newDirectManager(logf), nil
return newDirectManager(logf, health), nil
}
default:
return newDirectManager(logf), nil
return newDirectManager(logf, health), nil
}
}

@ -31,7 +31,7 @@ func (kv kv) String() string {
var publishOnce sync.Once
func NewOSConfigurator(logf logger.Logf, interfaceName string) (ret OSConfigurator, err error) {
func NewOSConfigurator(logf logger.Logf, health *health.Tracker, interfaceName string) (ret OSConfigurator, err error) {
env := newOSConfigEnv{
fs: directFS{},
dbusPing: dbusPing,
@ -40,7 +40,7 @@ func NewOSConfigurator(logf logger.Logf, interfaceName string) (ret OSConfigurat
nmVersionBetween: nmVersionBetween,
resolvconfStyle: resolvconfStyle,
}
mode, err := dnsMode(logf, env)
mode, err := dnsMode(logf, health, env)
if err != nil {
return nil, err
}
@ -52,9 +52,9 @@ func NewOSConfigurator(logf logger.Logf, interfaceName string) (ret OSConfigurat
logf("dns: using %q mode", mode)
switch mode {
case "direct":
return newDirectManagerOnFS(logf, env.fs), nil
return newDirectManagerOnFS(logf, health, env.fs), nil
case "systemd-resolved":
return newResolvedManager(logf, interfaceName)
return newResolvedManager(logf, health, interfaceName)
case "network-manager":
return newNMManager(interfaceName)
case "debian-resolvconf":
@ -63,7 +63,7 @@ func NewOSConfigurator(logf logger.Logf, interfaceName string) (ret OSConfigurat
return newOpenresolvManager(logf)
default:
logf("[unexpected] detected unknown DNS mode %q, using direct manager as last resort", mode)
return newDirectManagerOnFS(logf, env.fs), nil
return newDirectManagerOnFS(logf, health, env.fs), nil
}
}
@ -77,7 +77,7 @@ type newOSConfigEnv struct {
resolvconfStyle func() string
}
func dnsMode(logf logger.Logf, env newOSConfigEnv) (ret string, err error) {
func dnsMode(logf logger.Logf, health *health.Tracker, env newOSConfigEnv) (ret string, err error) {
var debug []kv
dbg := func(k, v string) {
debug = append(debug, kv{k, v})

@ -286,7 +286,7 @@ func TestLinuxDNSMode(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var logBuf tstest.MemLogger
got, err := dnsMode(logBuf.Logf, tt.env)
got, err := dnsMode(logBuf.Logf, nil, tt.env)
if err != nil {
t.Fatal(err)
}

@ -8,6 +8,7 @@ import (
"fmt"
"os"
"tailscale.com/health"
"tailscale.com/types/logger"
)
@ -19,8 +20,8 @@ func (kv kv) String() string {
return fmt.Sprintf("%s=%s", kv.k, kv.v)
}
func NewOSConfigurator(logf logger.Logf, interfaceName string) (OSConfigurator, error) {
return newOSConfigurator(logf, interfaceName,
func NewOSConfigurator(logf logger.Logf, health *health.Tracker, interfaceName string) (OSConfigurator, error) {
return newOSConfigurator(logf, health, interfaceName,
newOSConfigEnv{
rcIsResolvd: rcIsResolvd,
fs: directFS{},
@ -33,7 +34,7 @@ type newOSConfigEnv struct {
rcIsResolvd func(resolvConfContents []byte) bool
}
func newOSConfigurator(logf logger.Logf, interfaceName string, env newOSConfigEnv) (ret OSConfigurator, err error) {
func newOSConfigurator(logf logger.Logf, health *health.Tracker, interfaceName string, env newOSConfigEnv) (ret OSConfigurator, err error) {
var debug []kv
dbg := func(k, v string) {
debug = append(debug, kv{k, v})
@ -48,7 +49,7 @@ func newOSConfigurator(logf logger.Logf, interfaceName string, env newOSConfigEn
bs, err := env.fs.ReadFile(resolvConf)
if os.IsNotExist(err) {
dbg("rc", "missing")
return newDirectManager(logf), nil
return newDirectManager(logf, health), nil
}
if err != nil {
return nil, fmt.Errorf("reading /etc/resolv.conf: %w", err)
@ -60,7 +61,7 @@ func newOSConfigurator(logf logger.Logf, interfaceName string, env newOSConfigEn
}
dbg("resolvd", "missing")
return newDirectManager(logf), nil
return newDirectManager(logf, health), nil
}
func rcIsResolvd(resolvConfContents []byte) bool {

@ -15,6 +15,7 @@ import (
"github.com/google/go-cmp/cmp"
dns "golang.org/x/net/dns/dnsmessage"
"tailscale.com/net/netmon"
"tailscale.com/net/tsdial"
"tailscale.com/tstest"
"tailscale.com/util/dnsname"
@ -87,7 +88,7 @@ func TestDNSOverTCP(t *testing.T) {
SearchDomains: fqdns("coffee.shop"),
},
}
m := NewManager(t.Logf, &f, nil, new(tsdial.Dialer), nil, nil)
m := NewManager(t.Logf, &f, nil, tsdial.NewDialer(netmon.NewStatic()), nil, nil)
m.resolver.TestOnlySetHook(f.SetResolver)
m.Set(Config{
Hosts: hosts(
@ -172,7 +173,7 @@ func TestDNSOverTCP_TooLarge(t *testing.T) {
SearchDomains: fqdns("coffee.shop"),
},
}
m := NewManager(log, &f, nil, new(tsdial.Dialer), nil, nil)
m := NewManager(log, &f, nil, tsdial.NewDialer(netmon.NewStatic()), nil, nil)
m.resolver.TestOnlySetHook(f.SetResolver)
m.Set(Config{
Hosts: hosts("andrew.ts.com.", "1.2.3.4"),

@ -12,6 +12,7 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"tailscale.com/net/dns/resolver"
"tailscale.com/net/netmon"
"tailscale.com/net/tsdial"
"tailscale.com/types/dnstype"
"tailscale.com/util/dnsname"
@ -613,7 +614,7 @@ func TestManager(t *testing.T) {
SplitDNS: test.split,
BaseConfig: test.bs,
}
m := NewManager(t.Logf, &f, nil, new(tsdial.Dialer), nil, nil)
m := NewManager(t.Logf, &f, nil, tsdial.NewDialer(netmon.NewStatic()), nil, nil)
m.resolver.TestOnlySetHook(f.SetResolver)
if err := m.Set(test.in); err != nil {

@ -23,6 +23,7 @@ import (
"golang.zx2c4.com/wireguard/windows/tunnel/winipcfg"
"tailscale.com/atomicfile"
"tailscale.com/envknob"
"tailscale.com/health"
"tailscale.com/types/logger"
"tailscale.com/util/dnsname"
"tailscale.com/util/winutil"
@ -44,11 +45,11 @@ type windowsManager struct {
closing bool
}
func NewOSConfigurator(logf logger.Logf, interfaceName string) (OSConfigurator, error) {
func NewOSConfigurator(logf logger.Logf, health *health.Tracker, interfaceName string) (OSConfigurator, error) {
ret := &windowsManager{
logf: logf,
guid: interfaceName,
wslManager: newWSLManager(logf),
wslManager: newWSLManager(logf, health),
}
if isWindows10OrBetter() {

@ -84,7 +84,7 @@ func TestManagerWindowsGPCopy(t *testing.T) {
}
defer delIfKey()
cfg, err := NewOSConfigurator(logf, fakeInterface.String())
cfg, err := NewOSConfigurator(logf, nil, fakeInterface.String())
if err != nil {
t.Fatalf("NewOSConfigurator: %v\n", err)
}
@ -213,7 +213,7 @@ func runTest(t *testing.T, isLocal bool) {
}
defer delIfKey()
cfg, err := NewOSConfigurator(logf, fakeInterface.String())
cfg, err := NewOSConfigurator(logf, nil, fakeInterface.String())
if err != nil {
t.Fatalf("NewOSConfigurator: %v\n", err)
}

@ -63,13 +63,14 @@ type resolvedManager struct {
ctx context.Context
cancel func() // terminate the context, for close
logf logger.Logf
ifidx int
logf logger.Logf
health *health.Tracker
ifidx int
configCR chan changeRequest // tracks OSConfigs changes and error responses
}
func newResolvedManager(logf logger.Logf, interfaceName string) (*resolvedManager, error) {
func newResolvedManager(logf logger.Logf, health *health.Tracker, interfaceName string) (*resolvedManager, error) {
iface, err := net.InterfaceByName(interfaceName)
if err != nil {
return nil, err
@ -82,8 +83,9 @@ func newResolvedManager(logf logger.Logf, interfaceName string) (*resolvedManage
ctx: ctx,
cancel: cancel,
logf: logf,
ifidx: iface.Index,
logf: logf,
health: health,
ifidx: iface.Index,
configCR: make(chan changeRequest),
}
@ -163,7 +165,7 @@ func (m *resolvedManager) run(ctx context.Context) {
// Reset backoff and SetDNSOSHealth after successful reconnect.
bo.BackOff(ctx, nil)
health.SetDNSOSHealth(nil)
m.health.SetDNSOSHealth(nil)
return nil
}
@ -241,7 +243,7 @@ func (m *resolvedManager) run(ctx context.Context) {
// Set health while holding the lock, because this will
// graciously serialize the resync's health outcome with a
// concurrent SetDNS call.
health.SetDNSOSHealth(err)
m.health.SetDNSOSHealth(err)
if err != nil {
m.logf("failed to configure systemd-resolved: %v", err)
}

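The resolved manager now records DNS health on an injected *health.Tracker instead of via package-level calls. A rough sketch of the pattern, assuming a zero-value Tracker is usable the way the injected one is here:

package main

import "tailscale.com/health"

func main() {
	// Standalone tracker for illustration only; in tailscaled a single
	// Tracker is created once and threaded into each subsystem.
	ht := new(health.Tracker)
	ht.SetDNSOSHealth(nil) // clear any previous OS-level DNS error
}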
@@ -10,7 +10,6 @@ import (
 "errors"
 "fmt"
 "io"
-"math/rand"
 "net"
 "net/http"
 "net/netip"
@@ -186,7 +185,7 @@ type resolverAndDelay struct {
 // forwarder forwards DNS packets to a number of upstream nameservers.
 type forwarder struct {
 logf logger.Logf
-netMon *netmon.Monitor
+netMon *netmon.Monitor // always non-nil
 linkSel ForwardLinkSelector // TODO(bradfitz): remove this when tsdial.Dialer absorbs it
 dialer *tsdial.Dialer
@@ -214,11 +213,10 @@ type forwarder struct {
 cloudHostFallback []resolverAndDelay
 }

-func init() {
-rand.Seed(time.Now().UnixNano())
-}

 func newForwarder(logf logger.Logf, netMon *netmon.Monitor, linkSel ForwardLinkSelector, dialer *tsdial.Dialer, knobs *controlknobs.Knobs) *forwarder {
+if netMon == nil {
+panic("nil netMon")
+}
 f := &forwarder{
 logf: logger.WithPrefix(logf, "forward: "),
 netMon: netMon,
@@ -410,7 +408,6 @@ func (f *forwarder) getKnownDoHClientForProvider(urlBase string) (c *http.Client
 SingleHost: dohURL.Hostname(),
 SingleHostStaticResult: allIPs,
 Logf: f.logf,
-NetMon: f.netMon,
 })
 c = &http.Client{
 Transport: &http.Transport{
@@ -584,7 +581,7 @@ func (f *forwarder) send(ctx context.Context, fq *forwardQuery, rr resolverAndDe
 }

 // Kick off the race between the UDP and TCP queries.
-rh := race.New[[]byte](timeout, firstUDP, thenTCP)
+rh := race.New(timeout, firstUDP, thenTCP)
 resp, err := rh.Start(ctx)
 if err == nil {
 return resp, nil

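Dropping the explicit [[]byte] instantiation works because the type parameter is now inferred from the two racing functions. A self-contained sketch of util/race usage, assuming the New/Start shapes seen in forwarder.send above:

package main

import (
	"context"
	"fmt"
	"time"

	"tailscale.com/util/race"
)

func main() {
	fast := func(ctx context.Context) (string, error) { return "udp", nil }
	slow := func(ctx context.Context) (string, error) {
		select {
		case <-time.After(time.Second):
			return "tcp", nil
		case <-ctx.Done():
			return "", ctx.Err()
		}
	}
	// The string type parameter is inferred; race.New[string] is no
	// longer needed, matching the rh := race.New(...) change above.
	rh := race.New(100*time.Millisecond, fast, slow)
	res, err := rh.Start(context.Background())
	fmt.Println(res, err)
}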
@@ -181,7 +181,7 @@ func WriteRoutes(w *bufio.Writer, routes map[dnsname.FQDN][]*dnstype.Resolver) {
 // it delegates to upstream nameservers if any are set.
 type Resolver struct {
 logf logger.Logf
-netMon *netmon.Monitor // or nil
+netMon *netmon.Monitor // non-nil
 dialer *tsdial.Dialer // non-nil
 saveConfigForTests func(cfg Config) // used in tests to capture resolver config

 // forwarder forwards requests to upstream nameservers.
@@ -205,11 +205,14 @@ type ForwardLinkSelector interface {
 }

 // New returns a new resolver.
-// netMon optionally specifies a network monitor to use for socket rebinding.
-func New(logf logger.Logf, netMon *netmon.Monitor, linkSel ForwardLinkSelector, dialer *tsdial.Dialer, knobs *controlknobs.Knobs) *Resolver {
+func New(logf logger.Logf, linkSel ForwardLinkSelector, dialer *tsdial.Dialer, knobs *controlknobs.Knobs) *Resolver {
 if dialer == nil {
 panic("nil Dialer")
 }
+netMon := dialer.NetMon()
+if netMon == nil {
+logf("nil netMon")
+}
 r := &Resolver{
 logf: logger.WithPrefix(logf, "resolver: "),
 netMon: netMon,

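With netMon now derived from the dialer, the exported constructor loses a parameter. A sketch of the new call shape, matching the updated test helper below:

package main

import (
	"log"

	"tailscale.com/net/dns/resolver"
	"tailscale.com/net/netmon"
	"tailscale.com/net/tsdial"
)

func main() {
	d := tsdial.NewDialer(netmon.NewStatic())
	r := resolver.New(log.Printf,
		nil, // no link selector
		d,   // required; New panics on nil and reads d.NetMon()
		nil, // no control knobs
	)
	defer r.Close()
}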
@@ -28,6 +28,7 @@ import (
 "tailscale.com/net/tsdial"
 "tailscale.com/tstest"
 "tailscale.com/types/dnstype"
+"tailscale.com/types/logger"
 "tailscale.com/util/dnsname"
 )
@@ -313,7 +314,11 @@ func TestRDNSNameToIPv6(t *testing.T) {
 }

 func newResolver(t testing.TB) *Resolver {
-return New(t.Logf, nil /* no network monitor */, nil /* no link selector */, new(tsdial.Dialer), nil /* no control knobs */)
+return New(t.Logf,
+nil, // no link selector
+tsdial.NewDialer(netmon.NewStatic()),
+nil, // no control knobs
+)
 }

 func TestResolveLocal(t *testing.T) {
@@ -1009,7 +1014,13 @@ func TestForwardLinkSelection(t *testing.T) {
 // routes differently.
 specialIP := netaddr.IPv4(1, 2, 3, 4)

-fwd := newForwarder(t.Logf, nil, linkSelFunc(func(ip netip.Addr) string {
+netMon, err := netmon.New(logger.WithPrefix(t.Logf, ".... netmon: "))
+if err != nil {
+t.Fatal(err)
+}
+t.Cleanup(func() { netMon.Close() })
+
+fwd := newForwarder(t.Logf, netMon, linkSelFunc(func(ip netip.Addr) string {
 if ip == netaddr.IPv4(1, 2, 3, 4) {
 return "special"
 }

@@ -16,6 +16,7 @@ import (
 "time"

 "golang.org/x/sys/windows"
+"tailscale.com/health"
 "tailscale.com/types/logger"
 "tailscale.com/util/winutil"
 )
@@ -54,12 +55,14 @@ func wslDistros() ([]string, error) {
 // wslManager is a DNS manager for WSL2 linux distributions.
 // It configures /etc/wsl.conf and /etc/resolv.conf.
 type wslManager struct {
-logf logger.Logf
+logf logger.Logf
+health *health.Tracker
 }

-func newWSLManager(logf logger.Logf) *wslManager {
+func newWSLManager(logf logger.Logf, health *health.Tracker) *wslManager {
 m := &wslManager{
-logf: logf,
+logf: logf,
+health: health,
 }
 return m
 }
@@ -73,7 +76,7 @@ func (wm *wslManager) SetDNS(cfg OSConfig) error {
 }
 managers := make(map[string]*directManager)
 for _, distro := range distros {
-managers[distro] = newDirectManagerOnFS(wm.logf, wslFS{
+managers[distro] = newDirectManagerOnFS(wm.logf, wm.health, wslFS{
 user: "root",
 distro: distro,
 })

@@ -19,7 +19,6 @@ import (
 "time"

 "tailscale.com/envknob"
-"tailscale.com/net/netmon"
 "tailscale.com/types/logger"
 "tailscale.com/util/cloudenv"
 "tailscale.com/util/singleflight"
@@ -90,11 +89,6 @@ type Resolver struct {
 // be added to all log messages printed with this logger.
 Logf logger.Logf

-// NetMon optionally provides a netmon.Monitor to use to get the current
-// (cached) network interface.
-// If nil, the interface will be looked up dynamically.
-NetMon *netmon.Monitor

 sf singleflight.Group[string, ipRes]

 mu sync.Mutex

@@ -28,6 +28,7 @@ import (
 "tailscale.com/atomicfile"
 "tailscale.com/envknob"
+"tailscale.com/health"
 "tailscale.com/net/dns/recursive"
 "tailscale.com/net/netmon"
 "tailscale.com/net/netns"
@@ -64,9 +65,10 @@ func MakeLookupFunc(logf logger.Logf, netMon *netmon.Monitor) func(ctx context.C
 // fallbackResolver contains the state and configuration for a DNS resolution
 // function.
 type fallbackResolver struct {
-logf logger.Logf
-netMon *netmon.Monitor // or nil
-sf singleflight.Group[string, resolveResult]
+logf logger.Logf
+netMon *netmon.Monitor // or nil
+healthTracker *health.Tracker // or nil
+sf singleflight.Group[string, resolveResult]

 // for tests
 waitForCompare bool
@@ -79,7 +81,7 @@ func (fr *fallbackResolver) Lookup(ctx context.Context, host string) ([]netip.Ad
 // recursive resolver. (tailscale/corp#15261) In the future, we might
 // change the default (the opt.Bool being unset) to mean enabled.
 if disableRecursiveResolver() || !optRecursiveResolver().EqualBool(true) {
-return lookup(ctx, host, fr.logf, fr.netMon)
+return lookup(ctx, host, fr.logf, fr.healthTracker, fr.netMon)
 }

 addrsCh := make(chan []netip.Addr, 1)
@@ -99,7 +101,7 @@ func (fr *fallbackResolver) Lookup(ctx context.Context, host string) ([]netip.Ad
 go fr.compareWithRecursive(ctx, addrsCh, host)
 }

-addrs, err := lookup(ctx, host, fr.logf, fr.netMon)
+addrs, err := lookup(ctx, host, fr.logf, fr.healthTracker, fr.netMon)
 if err != nil {
 addrsCh <- nil
 return nil, err
@@ -207,7 +209,7 @@ func (fr *fallbackResolver) compareWithRecursive(
 }
 }

-func lookup(ctx context.Context, host string, logf logger.Logf, netMon *netmon.Monitor) ([]netip.Addr, error) {
+func lookup(ctx context.Context, host string, logf logger.Logf, ht *health.Tracker, netMon *netmon.Monitor) ([]netip.Addr, error) {
 if ip, err := netip.ParseAddr(host); err == nil && ip.IsValid() {
 return []netip.Addr{ip}, nil
 }
@@ -255,7 +257,7 @@ func lookup(ctx context.Context, host string, logf logger.Logf, netMon *netmon.M
 logf("trying bootstrapDNS(%q, %q) for %q ...", cand.dnsName, cand.ip, host)
 ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
 defer cancel()

-dm, err := bootstrapDNSMap(ctx, cand.dnsName, cand.ip, host, logf, netMon)
+dm, err := bootstrapDNSMap(ctx, cand.dnsName, cand.ip, host, logf, ht, netMon)
 if err != nil {
 logf("bootstrapDNS(%q, %q) for %q error: %v", cand.dnsName, cand.ip, host, err)
 continue
@@ -274,14 +276,16 @@ func lookup(ctx context.Context, host string, logf logger.Logf, netMon *netmon.M
 // serverName and serverIP are, say, "derpN.tailscale.com".
 // queryName is the name being sought (e.g. "controlplane.tailscale.com"), passed as a hint.
-func bootstrapDNSMap(ctx context.Context, serverName string, serverIP netip.Addr, queryName string, logf logger.Logf, netMon *netmon.Monitor) (dnsMap, error) {
+//
+// ht may be nil.
+func bootstrapDNSMap(ctx context.Context, serverName string, serverIP netip.Addr, queryName string, logf logger.Logf, ht *health.Tracker, netMon *netmon.Monitor) (dnsMap, error) {
 dialer := netns.NewDialer(logf, netMon)
 tr := http.DefaultTransport.(*http.Transport).Clone()
 tr.Proxy = tshttpproxy.ProxyFromEnvironment
 tr.DialContext = func(ctx context.Context, netw, addr string) (net.Conn, error) {
 return dialer.DialContext(ctx, "tcp", net.JoinHostPort(serverIP.String(), "443"))
 }
-tr.TLSClientConfig = tlsdial.Config(serverName, tr.TLSClientConfig)
+tr.TLSClientConfig = tlsdial.Config(serverName, ht, tr.TLSClientConfig)
 c := &http.Client{Transport: tr}

 req, err := http.NewRequestWithContext(ctx, "GET", "https://"+serverName+"/bootstrap-dns?q="+url.QueryEscape(queryName), nil)
 if err != nil {

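tlsdial.Config now threads a *health.Tracker between the server name and the base TLS config, and per the new doc comment the tracker may be nil. A sketch of the call shape; the hostname is illustrative, mirroring the "derpN.tailscale.com" example in the comment above:

package main

import (
	"net/http"

	"tailscale.com/net/tlsdial"
)

func main() {
	tr := http.DefaultTransport.(*http.Transport).Clone()
	// nil *health.Tracker is allowed ("ht may be nil"), so one-off
	// tools can skip health plumbing entirely.
	tr.TLSClientConfig = tlsdial.Config("derp1.tailscale.com", nil, tr.TLSClientConfig)
	_ = &http.Client{Transport: tr}
}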
@@ -27,7 +27,6 @@ import (
 "tailscale.com/derp/derphttp"
 "tailscale.com/envknob"
 "tailscale.com/net/dnscache"
-"tailscale.com/net/interfaces"
 "tailscale.com/net/neterror"
 "tailscale.com/net/netmon"
 "tailscale.com/net/netns"
@@ -159,6 +158,11 @@ func cloneDurationMap(m map[int]time.Duration) map[int]time.Duration {
 // active probes, and must receive STUN packet replies via ReceiveSTUNPacket.
 // Client can be used in a standalone fashion via the Standalone method.
 type Client struct {
+// NetMon is the netmon.Monitor to use to get the current
+// (cached) network interface.
+// It must be non-nil.
+NetMon *netmon.Monitor
+
 // Verbose enables verbose logging.
 Verbose bool
@@ -166,13 +170,6 @@ type Client struct {
 // If nil, log.Printf is used.
 Logf logger.Logf

-// NetMon optionally provides a netmon.Monitor to use to get the current
-// (cached) network interface.
-// If nil, the interface will be looked up dynamically.
-// TODO(bradfitz): make NetMon required. As of 2023-08-01, it basically always is
-// present anyway.
-NetMon *netmon.Monitor

 // TimeNow, if non-nil, is used instead of time.Now.
 TimeNow func() time.Time
@@ -391,7 +388,7 @@ const numIncrementalRegions = 3

 // makeProbePlan generates the probe plan for a DERPMap, given the most
 // recent report and whether IPv6 is configured on an interface.
-func makeProbePlan(dm *tailcfg.DERPMap, ifState *interfaces.State, last *Report) (plan probePlan) {
+func makeProbePlan(dm *tailcfg.DERPMap, ifState *netmon.State, last *Report) (plan probePlan) {
 if last == nil || len(last.RegionLatency) == 0 {
 return makeProbePlanInitial(dm, ifState)
 }
@@ -471,7 +468,7 @@ func makeProbePlan(dm *tailcfg.DERPMap, ifState *interfaces.State, last *Report)
 return plan
 }

-func makeProbePlanInitial(dm *tailcfg.DERPMap, ifState *interfaces.State) (plan probePlan) {
+func makeProbePlanInitial(dm *tailcfg.DERPMap, ifState *netmon.State) (plan probePlan) {
 plan = make(probePlan)

 for _, reg := range dm.Regions {
@@ -781,6 +778,9 @@ func (c *Client) GetReport(ctx context.Context, dm *tailcfg.DERPMap, opts *GetRe
 if dm == nil {
 return nil, errors.New("netcheck: GetReport: DERP map is nil")
 }
+if c.NetMon == nil {
+return nil, errors.New("netcheck: GetReport: Client.NetMon is nil")
+}

 c.mu.Lock()
 if c.curState != nil {
@@ -844,18 +844,7 @@ func (c *Client) GetReport(ctx context.Context, dm *tailcfg.DERPMap, opts *GetRe
 return c.finishAndStoreReport(rs, dm), nil
 }

-var ifState *interfaces.State
-if c.NetMon == nil {
-directState, err := interfaces.GetState()
-if err != nil {
-c.logf("[v1] interfaces: %v", err)
-return nil, err
-} else {
-ifState = directState
-}
-} else {
-ifState = c.NetMon.InterfaceState()
-}
+ifState := c.NetMon.InterfaceState()

 // See if IPv6 works at all, or if it's been hard disabled at the
 // OS level.
@@ -1667,7 +1656,6 @@ func (c *Client) nodeAddr(ctx context.Context, n *tailcfg.DERPNode, proto probeP
 Forward: net.DefaultResolver,
 UseLastGood: true,
 Logf: c.logf,
-NetMon: c.NetMon,
 }
 }
 resolver := c.resolver

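NetMon moves from optional to required: GetReport now fails fast instead of falling back to interfaces.GetState. A sketch of a minimal compliant client; the nil DERP map is passed only to demonstrate the argument checks shown above, a real caller supplies a *tailcfg.DERPMap:

package main

import (
	"context"
	"log"
	"time"

	"tailscale.com/net/netcheck"
	"tailscale.com/net/netmon"
)

func main() {
	c := &netcheck.Client{
		NetMon: netmon.NewStatic(), // required; nil makes GetReport error out
		Logf:   log.Printf,
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if _, err := c.GetReport(ctx, nil, nil); err != nil {
		log.Printf("GetReport: %v", err)
	}
}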
@@ -18,7 +18,7 @@ import (
 "testing"
 "time"

-"tailscale.com/net/interfaces"
+"tailscale.com/net/netmon"
 "tailscale.com/net/stun"
 "tailscale.com/net/stun/stuntest"
 "tailscale.com/tailcfg"
@@ -154,13 +154,19 @@ func TestHairpinWait(t *testing.T) {
 })
 }

+func newTestClient(t testing.TB) *Client {
+c := &Client{
+NetMon: netmon.NewStatic(),
+Logf: t.Logf,
+}
+return c
+}
+
 func TestBasic(t *testing.T) {
 stunAddr, cleanup := stuntest.Serve(t)
 defer cleanup()

-c := &Client{
-Logf: t.Logf,
-}
+c := newTestClient(t)

 ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
 defer cancel()
@@ -202,9 +208,8 @@ func TestWorksWhenUDPBlocked(t *testing.T) {
 dm := stuntest.DERPMapOf(stunAddr)
 dm.Regions[1].Nodes[0].STUNOnly = true

-c := &Client{
-Logf: t.Logf,
-}
+c := newTestClient(t)

 ctx, cancel := context.WithTimeout(context.Background(), 250*time.Millisecond)
 defer cancel()
@@ -667,7 +672,7 @@ func TestMakeProbePlan(t *testing.T) {
 }
 for _, tt := range tests {
 t.Run(tt.name, func(t *testing.T) {
-ifState := &interfaces.State{
+ifState := &netmon.State{
 HaveV6: tt.have6if,
 HaveV4: !tt.no4,
 }
@@ -895,14 +900,11 @@ func TestNoCaptivePortalWhenUDP(t *testing.T) {
 stunAddr, cleanup := stuntest.Serve(t)
 defer cleanup()

-c := &Client{
-Logf: t.Logf,
-testEnoughRegions: 1,
-// Set the delay long enough that we have time to cancel it
-// when our STUN probe succeeds.
-testCaptivePortalDelay: 10 * time.Second,
-}
+c := newTestClient(t)
+c.testEnoughRegions = 1
+// Set the delay long enough that we have time to cancel it
+// when our STUN probe succeeds.
+c.testCaptivePortalDelay = 10 * time.Second

 ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
 defer cancel()

@@ -24,12 +24,15 @@ import (
 // to bind, errors will be returned; if one or both protocols can bind,
 // no error is returned.
 func (c *Client) Standalone(ctx context.Context, bindAddr string) error {
+if c.NetMon == nil {
+panic("netcheck.Client.NetMon must be set")
+}
 if bindAddr == "" {
 bindAddr = ":0"
 }
 var errs []error

-u4, err := nettype.MakePacketListenerWithNetIP(netns.Listener(c.logf, nil)).ListenPacket(ctx, "udp4", bindAddr)
+u4, err := nettype.MakePacketListenerWithNetIP(netns.Listener(c.logf, c.NetMon)).ListenPacket(ctx, "udp4", bindAddr)
 if err != nil {
 c.logf("udp4: %v", err)
 errs = append(errs, err)
@@ -37,7 +40,7 @@ func (c *Client) Standalone(ctx context.Context, bindAddr string) error {
 go readPackets(ctx, c.logf, u4, c.ReceiveSTUNPacket)
 }

-u6, err := nettype.MakePacketListenerWithNetIP(netns.Listener(c.logf, nil)).ListenPacket(ctx, "udp6", bindAddr)
+u6, err := nettype.MakePacketListenerWithNetIP(netns.Listener(c.logf, c.NetMon)).ListenPacket(ctx, "udp6", bindAddr)
 if err != nil {
 c.logf("udp6: %v", err)
 errs = append(errs, err)

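A sketch of standalone use under the new requirement; the panic above fires if NetMon is left nil:

package main

import (
	"context"
	"log"

	"tailscale.com/net/netcheck"
	"tailscale.com/net/netmon"
)

func main() {
	c := &netcheck.Client{NetMon: netmon.NewStatic(), Logf: log.Printf}
	// ":0" binds ephemeral udp4/udp6 ports via netns.Listener, which
	// now receives c.NetMon instead of nil.
	if err := c.Standalone(context.Background(), ":0"); err != nil {
		log.Printf("standalone: %v", err)
	}
}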
@@ -7,7 +7,7 @@
 //go:build !ios && (darwin || freebsd)

-package interfaces
+package netmon

 import "net"

@@ -3,7 +3,7 @@
 //go:build ios

-package interfaces
+package netmon

 import (
 "log"

@@ -1,7 +1,7 @@
 // Copyright (c) Tailscale Inc & AUTHORS
 // SPDX-License-Identifier: BSD-3-Clause

-package interfaces
+package netmon

 import (
 "bytes"

@@ -6,7 +6,7 @@
 //go:build darwin || freebsd

-package interfaces
+package netmon

 import (
 "errors"

@@ -1,7 +1,7 @@
 // Copyright (c) Tailscale Inc & AUTHORS
 // SPDX-License-Identifier: BSD-3-Clause

-package interfaces
+package netmon

 import (
 "fmt"

@@ -1,7 +1,7 @@
 // Copyright (c) Tailscale Inc & AUTHORS
 // SPDX-License-Identifier: BSD-3-Clause

-package interfaces
+package netmon

 import (
 "errors"

@@ -3,7 +3,7 @@
 //go:build linux || (darwin && !ts_macext)

-package interfaces
+package netmon

 import (
 "testing"

@@ -3,7 +3,7 @@
 //go:build !linux && !windows && !darwin && !freebsd && !android

-package interfaces
+package netmon

 import "errors"

@@ -5,7 +5,7 @@
 //go:build freebsd

-package interfaces
+package netmon

 import (
 "syscall"

@@ -3,7 +3,7 @@
 //go:build !android

-package interfaces
+package netmon

 import (
 "bufio"