Compare commits

...

349 Commits

Author SHA1 Message Date
Andrea Gottardo e5f67f90a2
xcode: allow ICMP ping relay on macOS + iOS platforms (#12048)
Fixes tailscale/tailscale#10393
Fixes tailscale/corp#15412
Fixes tailscale/corp#19808

On Apple platforms, exit nodes and subnet routers have been unable to relay pings from Tailscale devices to non-Tailscale devices due to sandbox restrictions imposed on our network extensions by Apple. The sandbox prevented the code in netstack.go from spawning the `ping` process which we were using.

Replace that exec call with logic to send an ICMP echo request directly, which appears to work in userspace without triggering a sandbox violation in the syslog.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
6 hours ago
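
For illustration, sending an unprivileged ICMP echo directly from userspace with golang.org/x/net/icmp looks roughly like the sketch below. This is not the actual netstack.go change; the destination, payload, and timeout are placeholders.

    import (
        "log"
        "net"
        "time"

        "golang.org/x/net/icmp"
        "golang.org/x/net/ipv4"
    )

    // pingOnce sends a single ICMP echo request and waits briefly for a reply.
    func pingOnce(dst net.IP) error {
        // A "udp4" listener gives an unprivileged ICMP socket, avoiding both
        // raw-socket privileges and the need to exec `ping`.
        c, err := icmp.ListenPacket("udp4", "0.0.0.0")
        if err != nil {
            return err
        }
        defer c.Close()

        msg := icmp.Message{
            Type: ipv4.ICMPTypeEcho, Code: 0,
            Body: &icmp.Echo{ID: 1, Seq: 1, Data: []byte("probe")},
        }
        b, err := msg.Marshal(nil)
        if err != nil {
            return err
        }
        if _, err := c.WriteTo(b, &net.UDPAddr{IP: dst}); err != nil {
            return err
        }

        if err := c.SetReadDeadline(time.Now().Add(2 * time.Second)); err != nil {
            return err
        }
        buf := make([]byte, 1500)
        n, _, err := c.ReadFrom(buf)
        if err != nil {
            return err
        }
        reply, err := icmp.ParseMessage(1, buf[:n]) // 1 = protocol number for ICMPv4
        if err != nil {
            return err
        }
        if reply.Type != ipv4.ICMPTypeEchoReply {
            log.Printf("unexpected ICMP message type %v", reply.Type)
        }
        return nil
    }
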
Percy Wegmann 59848fe14b drive: rewrite LOCK paths
Fixes #12097

Signed-off-by: Percy Wegmann <percy@tailscale.com>
6 hours ago
James Tucker 87f00d76c4 tool/gocross: treat empty GOOS/GOARCH as native GOOS/GOARCH
Tracking down the side effect can otherwise be a pain; for example, on
Darwin an empty GOOS resulted in CGO being implicitly disabled. The user
intended `export GOOS=` to act like unset, and while this is a
misunderstanding, the main toolchain treats it that way.

Fixes tailscale/corp#20059

Signed-off-by: James Tucker <james@tailscale.com>
7 hours ago
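
The normalization itself is small; an illustrative sketch (not the actual gocross code):

    import "runtime"

    // normalizeTarget treats an empty GOOS/GOARCH (e.g. from `export GOOS=`)
    // the same as an unset one, falling back to the native values.
    func normalizeTarget(goos, goarch string) (string, string) {
        if goos == "" {
            goos = runtime.GOOS
        }
        if goarch == "" {
            goarch = runtime.GOARCH
        }
        return goos, goarch
    }
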
Irbe Krumina 76c30e014d
cmd/containerboot: warn when an ingress proxy with an IPv4 tailnet address is being created for an IPv6 backend(s) (#12159)
Updates tailscale/tailscale#12156

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
8 hours ago
Maisem Ali 8feb4ff5d2 version: add GitCommitTime to Meta
Updates tailscale/corp#1297

Signed-off-by: Maisem Ali <maisem@tailscale.com>
10 hours ago
Maisem Ali 359ef61263 Revert "version: add Info func to expose EmbeddedInfo"
This reverts commit e3dec086e6.

Going to reuse Meta instead as that is already exported.

Updates tailscale/corp#1297

Signed-off-by: Maisem Ali <maisem@tailscale.com>
10 hours ago
Sonia Appasamy 89947606b2 api.md: document device invite apis
Updates tailscale/corp#18153

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
1 day ago
Sonia Appasamy b094e8c925 api.md: document user invite apis
Updates tailscale/corp#18153

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
1 day ago
Maisem Ali e3dec086e6 version: add Info func to expose EmbeddedInfo
To be used in a different repo.

Updates tailscale/corp#1297

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 day ago
Kevin Liang 7f83f9fc83 Net/DNS/Publicdns: update the IPv6 range that we use to recreate route endpoint for control D
In this commit I updated the IPv6 range we use to generate the Control D DoH IP; we were previously using the NextDNSRanges to generate it, so I updated the code to use the correct Control D range.

Updates: #7946
Signed-off-by: Kevin Liang <kevinliang@tailscale.com>
1 day ago
Brad Fitzpatrick 6877d44965 prober: plumb a now-required netmon to derphttp
Updates #11896

Change-Id: Ie2f9cd024d85b51087d297aa36c14a9b8a2b8129
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 day ago
Maisem Ali 1f51bb6891 net/tstun: do SNAT after filterPacketOutboundToWireGuard
In a configuration where the local node (ip1) has a different IP (ip2)
that it uses to communicate with a peer (ip3) we would do UDP flow
tracking on the `ip2->ip3` tuple. When we receive the response from
the peer `ip3->ip2` we would dnat it back to `ip3->ip1` which would
then not match the flow track state and the packet would get dropped.

To fix this, we should do flow tracking on the `ip1->ip3` tuple instead
of `ip2->ip3`, which requires doing SNAT after running filterPacketOutboundToWireGuard.

Updates tailscale/corp#19971, tailscale/corp#8020

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2 days ago
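
Schematically, the fix is just an ordering change on the outbound path. In the self-contained sketch below, every type and helper is a placeholder standing in for the real tstun hooks (only filterPacketOutboundToWireGuard is named in the commit); it shows only that flow tracking now runs on the pre-SNAT ip1->ip3 tuple.

    type packet struct{ src, dst string }

    type verdict int

    const (
        accept verdict = iota
        drop
    )

    // filterOutbound and snat stand in for filterPacketOutboundToWireGuard and
    // the masquerade step in tstun; both are placeholders.
    func filterOutbound(p *packet) verdict { return accept }
    func snat(p *packet)                   { p.src = "ip2" }

    // handleOutbound shows the fixed ordering: the filter (and its UDP flow
    // tracking) sees the original ip1->ip3 tuple, and only then is the source
    // rewritten to ip2 for the wire, so the DNAT'ed reply ip3->ip1 matches.
    func handleOutbound(p *packet) verdict {
        if filterOutbound(p) != accept {
            return drop
        }
        snat(p)
        return accept
    }
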
Andrea Gottardo 60266be298
version: fix macOS uploads by increasing build number prefix (#12134)
Fixes tailscale/corp#19979

A build with version number 275 was uploaded to the App Store without bumping OSS first. The presence of that build is causing any 274.* build to be rejected. To address this, added +1 to the year component, which means new builds will use the 275.* prefix.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2 days ago
Andrew Dunham c6d42b1093 derp: remove stats goroutine, use a timer
Without changing behaviour, don't create a goroutine per connection that
sits and sleeps, but rather use a timer that wakes up and gathers
statistics on a regular basis.

Fixes #12127

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ibc486447e403070bdc3c2cd8ae340e7d02854f21
2 days ago
Irbe Krumina 7ef2f72135
util/linuxfw: fix IPv6 availability check for nftables (#12009)
* util/linuxfw: fix IPv6 NAT availability check for nftables

When running firewall in nftables mode,
there is no need for a separate NAT availability check
(unlike with iptables, there are no hosts that support nftables, but not IPv6 NAT - see tailscale/tailscale#11353).
This change fixes a firewall NAT availability check that was using the no-longer set ipv6NATAvailable field
by removing the field and using a method that, for nftables, just checks that IPv6 is available.

Updates tailscale/tailscale#12008

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
3 days ago
Brad Fitzpatrick 8aa5c3534d ipn/ipnlocal: simplify authURL vs authURLSticky, remove interact field
The previous LocalBackend & CLI 'up' changes improved some stuff, but
might've been too aggressive in some edge cases.

This simplifies the authURL vs authURLSticky distinction and removes
the interact field, which seemed to just be about duplicate URL
suppression in IPN bus, back from when the IPN bus was a single client
at a time. This moves that suppression to a different spot.

Fixes #12119
Updates #12028
Updates #12042

Change-Id: I1f8800b1e82ccc1c8a0d7abba559e7404ddf41e4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 days ago
Parker Higgins 7b3e30f391
words: add some fruit with scales (#8460)
Signed-off-by: Parker Higgins <parker@tailscale.com>
3 days ago
Maisem Ali 79b2d425cf types/views: move AsMap to Map from *Map
This was a typo in 2e19790f61.
It should have been on `Map` and not on `*Map` as otherwise
it doesn't allow for chaining like `someView.SomeMap().AsMap()`
and requires first assigning it to a variable.

Updates #typo

Signed-off-by: Maisem Ali <maisem@tailscale.com>
5 days ago
Charlotte Brandhorst-Satzkorn fc1ae97e10
words: I had a feline we were missing some words (#12098)
pspspsps

Updates tailscale/corp#14698

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
6 days ago
Maisem Ali 486a423716 tsnet: split user facing and backend logging
This adds a new `UserLogf` field to the `Server` struct.
When set, any logs generated by Server are logged using
`UserLogf`, and all spammy backend logs are logged to `Logf`.

If `UserLogf` is unset, we default to `log.Printf`, and
if `Logf` is unset we discard all the spammy logs.

Fixes #12094

Signed-off-by: Maisem Ali <maisem@tailscale.com>
6 days ago
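
A minimal usage sketch, assuming the split behaves as described above (UserLogf for user-facing messages, Logf for verbose backend logs); the hostname is illustrative.

    import (
        "log"

        "tailscale.com/tsnet"
        "tailscale.com/types/logger"
    )

    func newServer() *tsnet.Server {
        return &tsnet.Server{
            Hostname: "example",
            UserLogf: log.Printf,     // concise, user-facing messages
            Logf:     logger.Discard, // spammy backend logs (the default when unset)
        }
    }
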
Percy Wegmann 7209c4f91e drive: parse depth 1 PROPFIND results to include children in cache
Clients often perform a PROPFIND for the parent directory before
performing PROPFIND for specific children within that directory.
The PROPFIND for the parent directory is usually done at depth 1,
meaning that we already have information for all of the children.
By immediately adding that to the cache, we save a roundtrip to
the remote peer on the PROPFIND for the specific child.

Updates tailscale/corp#19779

Signed-off-by: Percy Wegmann <percy@tailscale.com>
6 days ago
Irbe Krumina d86d1e7601
cmd/k8s-operator,cmd/containerboot,ipn,k8s-operator: turn off stateful filter for egress proxies. (#12075)
Turn off stateful filtering for egress proxies to allow cluster
traffic to be forwarded to tailnet.

Allow configuring stateful filter via tailscaled config file.

Deprecate EXPERIMENTAL_TS_CONFIGFILE_PATH env var and introduce a new
TS_EXPERIMENTAL_VERSIONED_CONFIG env var that can be used to provide
containerboot a directory that should contain one or more
tailscaled config files named cap-<tailscaled-cap-version>.hujson.
Containerboot will pick the one with the newest capability version
that is not newer than its current capability version.

Proxies with this change will not work with older Tailscale
Kubernetes operator versions - users must ensure that
the deployed operator is at the same version or newer (up to
4 version skew) than the proxies.

Updates tailscale/tailscale#12061

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
6 days ago
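
A rough sketch of the selection rule described above (pick the newest cap-<N>.hujson that is not newer than the current capability version); this is not the actual containerboot code, and the helper name is hypothetical.

    import (
        "fmt"
        "os"
        "path/filepath"
        "strconv"
        "strings"
    )

    // pickConfig returns the config file in dir with the highest capability
    // version that does not exceed ourCap.
    func pickConfig(dir string, ourCap int) (string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return "", err
        }
        best, bestVer := "", -1
        for _, e := range entries {
            name := e.Name()
            if !strings.HasPrefix(name, "cap-") || !strings.HasSuffix(name, ".hujson") {
                continue
            }
            v, err := strconv.Atoi(strings.TrimSuffix(strings.TrimPrefix(name, "cap-"), ".hujson"))
            if err != nil || v > ourCap {
                continue // malformed, or requires a newer client than this one
            }
            if v > bestVer {
                best, bestVer = filepath.Join(dir, name), v
            }
        }
        if best == "" {
            return "", fmt.Errorf("no usable config file in %s for capability version %d", dir, ourCap)
        }
        return best, nil
    }
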
Claire Wang e070af7414
ipnlocal, magicsock: add more description to storing last suggested exit (#11998)
node related functions
Updates tailscale/corp#19681

Signed-off-by: Claire Wang <claire@tailscale.com>
6 days ago
Andrew Dunham 5708fc0639 wgengine/router: print Docker warning when stateful filtering is enabled
When Docker is detected on the host and stateful filtering is enabled,
Docker containers may be unable to reach Tailscale nodes (depending on
the network settings of a container). Detect Docker when stateful
filtering is enabled and print a health warning to aid users in noticing
this issue.

We avoid printing the warning if the current node isn't advertising any
subnet routes and isn't an exit node, since without one of those being
true, the node wouldn't have the correct AllowedIPs in WireGuard to
allow a Docker container to connect to another Tailscale node anyway.

Updates #12070

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Idef538695f4d101b0ef6f3fb398c0eaafc3ae281
1 week ago
Andrew Dunham 25e32cc3ae util/linuxfw: fix table name in DelStatefulRule
Updates #12061
Follow-up to #12072

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I2ba8c4bff14d93816760ff5eaa1a16f17bad13c1
1 week ago
Maisem Ali 21abb7f402 cmd/tailscale: add missing set flags for linux
We were missing `snat-subnet-routes`, `stateful-filtering`
and `netfilter-mode`. Add those to set too.

Fixes #12061

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 week ago
Anton Tolchanov ac638f32c0 util/linuxfw: fix stateful packet filtering in nftables mode
To match iptables:
b5dbf155b1/util/linuxfw/iptables_runner.go (L536)

Updates #12066

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
1 week ago
Irbe Krumina b5dbf155b1
cmd/k8s-operator: default nameserver image to tailscale/k8s-nameserver:unstable (#11991)
We are now publishing nameserver images to tailscale/k8s-nameserver,
so we can start defaulting the images if users haven't set
them explicitly, same as we already do with proxy images.

The nameserver images are currently only published for unstable
track, so we have to use the static 'unstable' tag.
Once we start publishing to stable, we can make the operator
default to its own tag (because then we'll know that for each
operator tag X there is also a nameserver tag X as we always
cut all images for a given tag).

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
1 week ago
Andrew Dunham 8f7f9ac17e wgengine/netstack: handle 4via6 routes that are advertised by the same node
Previously, a node that was advertising a 4via6 route wouldn't be able
to make use of that same route; the packet would be delivered to
Tailscale, but since we weren't accepting it in handleLocalPackets, the
packet wouldn't be delivered to netstack and would never hit the 4via6
logic. Let's add that support so that usage of 4via6 is consistent
regardless of where the connection is initiated from.

Updates #11304

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ic28dc2e58080d76100d73b93360f4698605af7cb
1 week ago
Nick O'Neill 7901925ad3
VERSION.txt: this is v1.67.0 (#12063)
Signed-off-by: Nick O'Neill <nick@tailscale.com>
1 week ago
Sonia Appasamy 8130656780 api.md: remove extraneous commas in json examples
Updates #cleanup

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
1 week ago
Anton Tolchanov 6f4a1dc6bf ipn/ipnlocal: fix another read of keyExpired outside mutex
Updates #12039

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
1 week ago
Brad Fitzpatrick e968b0ecd7 cmd/tailscale,controlclient,ipnlocal: fix 'up', deflake tests more
The CLI's "up" is kinda chaotic and LocalBackend.Start is kinda
chaotic and they both need to be redone/deleted (respectively), but
this fixes some buggy behavior meanwhile. We were previously calling
StartLoginInteractive (to start the controlclient's RegisterRequest)
redundantly in some cases, causing test flakes depending on timing and
up's weird state machine.

We only need to call StartLoginInteractive in the client if Start itself
doesn't. But Start doesn't tell us that. So cheat a bit and a put the
information about whether there's a current NodeKey in the ipn.Status.
It used to be accessible over LocalAPI via GetPrefs as a private key but
we removed that for security. But a bool is fine.

So then only call StartLoginInteractive if that bool is false and don't
do it in the WatchIPNBus loop.

Fixes #12028
Updates #12042

Change-Id: I0923c3f704a9d6afd825a858eb9a63ca7c1df294
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Brad Fitzpatrick e5ef35857f ipn/ipnlocal: fix read of keyExpired outside mutex
Fixes #12039

Change-Id: I28c8a282ce12619f17103e9535841f15394ce685
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Brad Fitzpatrick 21509db121 ipn/ipnlocal, all: plumb health trackers in tests
I saw some panics in CI, like:

    2024-05-08T04:30:25.9553518Z ## WARNING: (non-fatal) nil health.Tracker (being strict in CI):
    2024-05-08T04:30:25.9554043Z goroutine 801 [running]:
    2024-05-08T04:30:25.9554489Z tailscale.com/health.(*Tracker).nil(0x0)
    2024-05-08T04:30:25.9555086Z 	tailscale.com/health/health.go:185 +0x70
    2024-05-08T04:30:25.9555688Z tailscale.com/health.(*Tracker).SetUDP4Unbound(0x0, 0x0)
    2024-05-08T04:30:25.9556373Z 	tailscale.com/health/health.go:532 +0x2f
    2024-05-08T04:30:25.9557296Z tailscale.com/wgengine/magicsock.(*Conn).bindSocket(0xc0003b4808, 0xc0003b4878, {0x1fbca53, 0x4}, 0x0)
    2024-05-08T04:30:25.9558301Z 	tailscale.com/wgengine/magicsock/magicsock.go:2481 +0x12c5
    2024-05-08T04:30:25.9559026Z tailscale.com/wgengine/magicsock.(*Conn).rebind(0xc0003b4808, 0x0)
    2024-05-08T04:30:25.9559874Z 	tailscale.com/wgengine/magicsock/magicsock.go:2510 +0x16f
    2024-05-08T04:30:25.9561038Z tailscale.com/wgengine/magicsock.NewConn({0xc000063c80, 0x0, 0xc000197930, 0xc000197950, 0xc000197960, {0x0, 0x0}, 0xc000197970, 0xc000198ee0, 0x0, ...})
    2024-05-08T04:30:25.9562402Z 	tailscale.com/wgengine/magicsock/magicsock.go:476 +0xd5f
    2024-05-08T04:30:25.9563779Z tailscale.com/wgengine.NewUserspaceEngine(0xc000063c80, {{0x22c8750, 0xc0001976b0}, 0x0, {0x22c3210, 0xc000063c80}, {0x22c31d8, 0x2d3c900}, 0x0, 0x0, ...})
    2024-05-08T04:30:25.9564982Z 	tailscale.com/wgengine/userspace.go:389 +0x159d
    2024-05-08T04:30:25.9565529Z tailscale.com/ipn/ipnlocal.newTestBackend(0xc000358b60)
    2024-05-08T04:30:25.9566086Z 	tailscale.com/ipn/ipnlocal/serve_test.go:675 +0x2a5
    2024-05-08T04:30:25.9566612Z ta

Updates #11874

Change-Id: I3432ed52d670743e532be4642f38dbd6e3763b1b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Brad Fitzpatrick 727c0d6cfd ipn/ipnserver: close a small race in ipnserver, ~simplify code
There was a small window in ipnserver after we assigned a LocalBackend
to the ipnserver's atomic but before we Start'ed it where our
initialization Start could conflict with API calls from the LocalAPI.

Simplify that a bit and lay out the rules in the docs.

Updates #12028

Change-Id: Ic5f5e4861e26340599184e20e308e709edec68b1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Maisem Ali 32bc596062 ipn/ipnlocal: acquire b.mu once in Start
We used to Lock, Unlock, Lock, Unlock quite a few
times in Start resulting in all sorts of weird race
conditions. Simplify it all and only Lock/Unlock once.

Updates #11649

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 week ago
Maisem Ali 9380e2dfc6 ipn/ipnlocal: use lockAndGetUnlock in Start
This removes one of the Lock,Unlock,Lock,Unlock at least in
the Start function. Still has 3 more of these.

Updates #11649

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 week ago
Maisem Ali e1011f1387 ipn/ipnlocal: call SetNetInfoCallback from NewLocalBackend
Instead of calling it from Start every time, call it from NewLocalBackend
once.

Updates #11649

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 week ago
Maisem Ali 85b9a6c601 net/netcheck: do not add derps if IPv4/IPv6 is set to "none"
It was documented as such but seems to have been dropped in a
refactor; restore the behavior. This brings down the time it
takes to run a single integration test by 2s, which adds up
quite a bit.

Updates tailscale/corp#19786

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 week ago
Brad Fitzpatrick d7bdd8e2a7 go.toolchain.rev: update to Go 1.22.3
Updates #12044

Change-Id: I4ad16f2bfcec13735cb10713e028b2c5527501ed
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
kari-ts 3c4c9dc1d2
web: use EditPrefs instead of passing UpdatePrefs to starting (#12040)
Web version of https://github.com/tailscale/tailscale-android/pull/370
This allows us to update the prefs rather than creating new prefs

Updates tailscale/tailscale#11731

Signed-off-by: kari-ts <kari@tailscale.com>
1 week ago
Brad Fitzpatrick 80df8ffb85 control/controlclient: early return and outdent some code
I found this too hard to read before.

This is pulled out of #12033 as it's unrelated cleanup in retrospect.

Updates #12028

Change-Id: I727c47e573217e3d1973c5b66a76748139cf79ee
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Andrew Lytvynov 471731771c
ipn/ipnlocal: set default NoStatefulFiltering in ipn.NewPrefs (#12031)
This way the default gets populated on first start, when no existing
state exists to migrate. Also fix `ipn.PrefsFromBytes` to preserve empty
fields, rather than layering `NewPrefs` values on top.

Updates https://github.com/tailscale/corp/issues/19623

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
1 week ago
Paul Scott 78fa698fe6 cmd/tailscale/cli/ffcomplete: remove fullstop from ShortHelp
Updates #cleanup

Signed-off-by: Paul Scott <paul@tailscale.com>
1 week ago
Maisem Ali 482890b9ed tailcfg: bump capver for using NodeAttrUserDialUseRoutes for DNS
Missed in f62e678df8.

Updates tailscale/corp#18725
Updates #4529

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 week ago
Maisem Ali af97e7a793 tailcfg,all: add/plumb Node.IsJailed
This adds a new bool that can be sent down from control
to do jailing on the client side. Previously this would
only be done from control by modifying the packet filter
we sent down to clients. This would result in a lot of
additional work/CPU on control; we could instead just
do this on the client. This has always been a TODO that
we keep putting off, so we might as well do it now.

Updates tailscale/corp#19623

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 week ago
Maisem Ali e67069550b ipn/ipnlocal,net/tstun,wgengine: create and plumb jailed packet filter
This plumbs a packet filter for jailed nodes through to the
tstun.Wrapper; the filter for a jailed node is equivalent to a "shields
up" filter. Currently a no-op as there is no way for control to
tell the client whether a peer is jailed.

Updates tailscale/corp#19623

Co-authored-by: Andrew Dunham <andrew@du.nham.ca>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Change-Id: I5ccc5f00e197fde15dd567485b2a99d8254391ad
1 week ago
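
Conceptually, the plumbing in the two commits above boils down to selecting a per-peer filter. A minimal sketch (the selection helper is hypothetical; Node.IsJailed is from the commit above):

    import (
        "tailscale.com/tailcfg"
        "tailscale.com/wgengine/filter"
    )

    // filterForPeer picks which packet filter applies to traffic from a peer.
    // A jailed peer gets the jailed ("shields up"-equivalent) filter.
    func filterForPeer(peer *tailcfg.Node, normal, jailed *filter.Filter) *filter.Filter {
        if peer.IsJailed {
            return jailed
        }
        return normal
    }
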
Nick Khyl f62e678df8 net/dns/resolver, control/controlknobs, tailcfg: use UserDial instead of SystemDial to dial DNS servers
Now that tsdial.Dialer.UserDial has been updated to honor the configured routes
and dial external network addresses without going through Tailscale, while also being
able to dial a node/subnet router on the tailnet, we can start using UserDial to forward
DNS requests. This is primarily needed for DNS over TCP when forwarding requests
to internal DNS servers, but we also update getKnownDoHClientForProvider to use it.

Updates tailscale/corp#18725

Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 week ago
Andrew Lytvynov c28f5767bf
various: implement stateful firewalling on Linux (#12025)
Updates https://github.com/tailscale/corp/issues/19623


Change-Id: I7980e1fb736e234e66fa000d488066466c96ec85

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Co-authored-by: Andrew Dunham <andrew@du.nham.ca>
1 week ago
Maisem Ali 5ef178fdca net/tstun: refactor peerConfig to allow storing more details
This refactors the peerConfig struct to allow storing more
details about a peer and not just the masq addresses. To be
used in a follow up change.

As a side effect, this also makes the DNAT logic on the inbound
packet stricter. Previously it would only match against the packet's
dst IP; now it also takes the src IP into consideration. The behavior
is at parity with the SNAT case.

Updates tailscale/corp#19623

Co-authored-by: Andrew Dunham <andrew@du.nham.ca>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Change-Id: I5f40802bebbf0f055436eb8824e4511d0052772d
1 week ago
Brad Fitzpatrick f3d2fd22ef cmd/tailscale/cli: don't start WatchIPNBus until after up's initial Start
The CLI "up" command is a historical mess, both on the CLI side and
the LocalBackend side. We're getting closer to cleaning it up, but in
the meantime it was again implicated in flaky tests.

In this case, the background goroutine running WatchIPNBus was very
occasionally running enough to get to its StartLoginInteractive call
before the original goroutine did its Start call. That meant
integration tests were very rarely but sometimes logging in with the
default control plane URL out on the internet
(controlplane.tailscale.com) instead of the localhost control server
for tests.

This also might've affected new Headscale etc users on initial "up".

Fixes #11960
Fixes #11962

Change-Id: I36f8817b69267a99271b5ee78cb7dbf0fcc0bd34
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Brad Fitzpatrick aadb8d9d21 ipn/ipnlocal: don't send an empty BrowseToURL w/ WatchIPNBus NotifyInitialState
I noticed this while working on the following fix to #11962.

Updates #11962

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Change-Id: I4c5894d8899d1ae8c42f54ecfd4d05a4a7ac598c
1 week ago
Brad Fitzpatrick e26f76a1c4 tstest/integration: add more debugging, logs to catch flaky test
Updates #11962

Change-Id: I1ab0db69bdf8d1d535aa2cef434c586311f0fe18
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Nick Khyl caa3d7594f ipn/ipnlocal, net/tsdial: plumb routes into tsdial and use them in UserDial
We'd like to use tsdial.Dialer.UserDial instead of SystemDial for DNS over TCP.
This is primarily necessary to properly dial internal DNS servers accessible
over Tailscale and subnet routes. However, to avoid issues when switching
between Wi-Fi and cellular, we need to ensure that we don't retain connections
to any external addresses on the old interface. Therefore, we need to determine
which dialer to use internally based on the configured routes.

This plumbs routes and localRoutes from router.Config to tsdial.Dialer,
and updates UserDial to use either the peer dialer or the system dialer,
depending on the network address and the configured routes.

Updates tailscale/corp#18725
Fixes #4529

Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 week ago
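
A simplified version of the decision described above: dial through Tailscale when the destination falls inside a configured route, otherwise use the system dialer. This is an illustration with a hypothetical helper, not the actual tsdial logic.

    import "net/netip"

    // usePeerDialer reports whether dst should be dialed through Tailscale,
    // i.e. whether it falls inside any configured tailnet/subnet route.
    func usePeerDialer(dst netip.Addr, routes []netip.Prefix) bool {
        for _, r := range routes {
            if r.Contains(dst) {
                return true
            }
        }
        return false
    }
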
Brad Fitzpatrick ce8969d82b net/portmapper: add envknob to disable portmapper in localhost integration tests
Updates #11962

Change-Id: I8212cd814985b455d96986de0d4c45f119516cb3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Brad Fitzpatrick 7e0dd61e61 ipn/ipnlocal, tstest/integration: add panic to catch flaky test in the act
Updates #11962

Change-Id: Ifa24b82f9c76639bfd83278a7c2fe9cf42897bbb
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
License Updater 258b5042fe licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
1 week ago
Brad Fitzpatrick c3c18027c6 all: make more tests pass/skip in airplane mode
Updates tailscale/corp#19786

Change-Id: Iedc6730fe91c627b556bff5325bdbaf7bf79d8e6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Claire Wang 41f2195899
util/syspolicy: add auto exit node related keys (#11996)
Updates tailscale/corp#19681

Signed-off-by: Claire Wang <claire@tailscale.com>
1 week ago
Brad Fitzpatrick 1a963342c7 util/set: add Of variant of SetOf that takes variadic parameter
set.Of(1, 2, 3) is prettier than set.SetOf([]int{1, 2, 3}).

I was going to change the signature of SetOf but then I noticed its
name has stutter anyway, so I kept it for compatibility. People can
prefer to use set.Of for new code or slowly migrate.

Also add a lazy Make method, which I often find myself wanting,
without having to resort to uglier mak.Set(&set, k, struct{}{}).

Updates #cleanup

Change-Id: Ic6f3870115334efcbd65e79c437de2ad3edb7625
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
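
Assuming the API behaves as the message describes (Of and Make being the new additions), usage looks roughly like:

    import "tailscale.com/util/set"

    func example() {
        s := set.Of(1, 2, 3)          // new variadic form
        _ = set.SetOf([]int{1, 2, 3}) // older slice form, kept for compatibility
        _ = s.Contains(2)

        // Make lazily initializes the zero value so Add works without mak.Set.
        var lazy set.Set[string]
        lazy.Make()
        lazy.Add("hello")
    }
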
Will Norris 80decd83c1 tsweb: remove redundant bumpStartIfNeeded func
Updates #12001

Signed-off-by: Will Norris <will@tailscale.com>
2 weeks ago
Maisem Ali ed843e643f types/views: add AppendStrings util func
Updates tailscale/corp#19623

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2 weeks ago
Maisem Ali fd6ba43b97 types/views: remove duplicate SliceContainsFunc
We already have `(Slice[T]).ContainsFunc`.

Updates #cleanup

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2 weeks ago
Will Norris 46980c9664 tsweb: ensure in-flight requests are always marked as finished
The inflight request tracker only starts recording a new bucket after
the first non-error request. Unfortunately, it's written in such a way
that ONLY successful requests are ever marked as being finished. Once a
bucket has had at least one successful request and begun to be tracked,
all subsequent error cases are never marked finished and always appear
as in-flight.

This change ensures that if a request is recorded as having been
started, we also mark it as finished at the end.

Updates tailscale/corp#19767

Signed-off-by: Will Norris <will@tailscale.com>
2 weeks ago
Percy Wegmann 817badf9ca ipn/ipnlocal: reuse transport across Taildrive remotes
This prevents us from opening a new connection on each HTTP
request.

Updates #11967

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Percy Wegmann 2cf764e998 drive: actually cache results on statcache
Updates #11967

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Irbe Krumina 406293682c
cmd/k8s-operator: cleanup runReconciler signature (#11993)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 weeks ago
Claire Wang 35872e86d2
ipnlocal, magicsock: store last suggested exit node id in local backend (#11959)
Updates tailscale/corp#19681

Signed-off-by: Claire Wang <claire@tailscale.com>
2 weeks ago
Brad Fitzpatrick b62cfc430a tstest/integration/testcontrol: fix data race
Noticed in earlier GitHub actions failure.

Fixes #11994

Change-Id: Iba8d753caaa3dacbe2da9171d96c5f99b12e62d7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Andrew Dunham e9505e5432 ipn/ipnlocal: plumb health.Tracker into profileManager constructor
Setting the field after-the-fact wasn't working because we could migrate
prefs on creation, which would set health status for auto updates.

Updates #11986

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I41d79ebd61d64829a3a9e70586ce56f62d24ccfd
2 weeks ago
Brad Fitzpatrick e42c4396cf net/netcheck: don't spam on ICMP socket permission denied errors
While debugging a failing test in airplane mode on macOS, I noticed
netcheck logspam about ICMP socket creation permission denied errors.

Apparently macOS just can't do those, or at least not in airplane
mode. Not worth spamming about.

Updates #cleanup

Change-Id: I302620cfd3c8eabb25202d7eef040c01bd8a843c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick 15fc6cd966 derp/derphttp: fix netcheck HTTPS probes
The netcheck client, when no UDP is available, probes distance using
HTTPS.

Several problems:

* It probes using /derp/latency-check.
* But cmd/derper serves the handler at /derp/probe
* Despite the difference, it worked by accident until c8f4dfc8c0
  which made netcheck's probe require a 2xx status code.
* in tests, we only use derphttp.Handler, so the cmd/derper-installed
  mux routes aren't present, so there's no probe. That breaks
  tests in airplane mode. netcheck.Client then reports "unexpected
  HTTP status 426" (Upgrade Required)

This makes derp handle both /derp/probe and /derp/latency-check
equivalently, and in both cmd/derper and derphttp.Handler standalone
modes.

I noticed this when wgengine/magicsock TestActiveDiscovery was failing
in airplane mode (no wifi). It still doesn't pass, but it gets
further.

Fixes #11989

Change-Id: I45213d4bd137e0f29aac8bd4a9ac92091065113f
2 weeks ago
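
In spirit, the fix serves the same 2xx probe handler at both paths, so netcheck's HTTPS probe succeeds whether it hits the cmd/derper mux or a bare derphttp.Handler. A sketch; the handler wiring and names are illustrative.

    import "net/http"

    func registerProbe(mux *http.ServeMux) {
        probe := func(w http.ResponseWriter, r *http.Request) {
            // netcheck requires a 2xx status since c8f4dfc8c0.
            w.WriteHeader(http.StatusOK)
        }
        mux.HandleFunc("/derp/probe", probe)
        mux.HandleFunc("/derp/latency-check", probe)
    }
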
Brad Fitzpatrick 1fe0983f2d cmd/derper,tstest/nettest: skip network-needing test in airplane mode
Not buying wifi on a short flight is a good way to find tests
that require network. Whoops.

Updates #cleanup

Change-Id: Ibe678e9c755d27269ad7206413ffe9971f07d298
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick 46f3feae96 ssh/tailssh: plumb health.Tracker in test
In prep for it being required in more places.

Updates #11874

Change-Id: Ib743205fc2a6c6ff3d2c4ed3a2b28cac79156539
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick 4fa6cbec27 ssh/tailssh: use ptr.To in test
Updates #cleanup

Change-Id: Ic98ba1b63c8205084b30f59f0ca343788edea5b0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick ee3bd4dbda derp/derphttp, net/netcheck: plumb netmon.Monitor to derp netcheck client
Fixes #11981

Change-Id: I0e15a09f93aefb3cfddbc12d463c1c08b83e09fd
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Percy Wegmann a03cb866b4 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Percy Wegmann 745fb31bd4 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Percy Wegmann 07e783c7be drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Percy Wegmann 3349e86c0a drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Percy Wegmann 0c11fd978b drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Percy Wegmann 9d22ec0ba2 drive: use secret token to authenticate access to file server on localhost
This prevents Mark-of-the-Web bypass attacks in case someone visits the
localhost WebDAV server directly.

Fixes tailscale/corp#19592

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Irbe Krumina cd633a7252
cmd/k8s-operator/deploy,k8s-operator: document that metrics are unstable (#11979)
Updates #11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 weeks ago
Andrew Dunham f97d0ac994 net/dns/resolver: add better error wrapping
To aid in debugging exactly what's going wrong, instead of the
not-particularly-useful "dns udp query: context deadline exceeded" error
that we currently get.

Updates #3786
Updates #10768
Updates #11620
(etc.)

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I76334bf0681a8a2c72c90700f636c4174931432c
2 weeks ago
Claire Wang e0287a4b33
wgengine: add exit destination logging enable for wgengine logger (#11952)
Updates tailscale/corp#18625
Co-authored-by: Kevin Liang <kevinliang@tailscale.com>
Signed-off-by: Claire Wang <claire@tailscale.com>
2 weeks ago
Irbe Krumina 19b31ac9a6
cmd/{k8s-operator,k8s-nameserver},k8s-operator: update nameserver config with records for ingress/egress proxies (#11019)
cmd/k8s-operator: optionally update dnsrecords Configmap with DNS records for proxies.

This commit adds functionality to automatically populate
DNS records for the in-cluster ts.net nameserver
to allow cluster workloads to resolve MagicDNS names
associated with operator's proxies.

The records are created as follows:
* For tailscale Ingress proxies there will be
a record mapping the MagicDNS name of the Ingress
device and each proxy Pod's IP address.
* For cluster egress proxies, configured via
tailscale.com/tailnet-fqdn annotation, there will be
a record for each proxy Pod, mapping
the MagicDNS name of the exposed
tailnet workload to the proxy Pod's IP.

No records will be created for any other proxy types.
Records will only be created if users have configured
the operator to deploy an in-cluster ts.net nameserver
by applying tailscale.com/v1alpha1.DNSConfig.

It is the user's responsibility to add the ts.net nameserver
as a stub nameserver for ts.net DNS names.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns
https://cloud.google.com/kubernetes-engine/docs/how-to/kube-dns#upstream_nameservers

See also https://github.com/tailscale/tailscale/pull/11017

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 weeks ago
Maisem Ali a49ed2e145 derp,ipn/ipnlocal: stop calling rand.Seed
It's deprecated and using it gets us the old slow behavior
according to https://go.dev/blog/randv2.

> Having eliminated repeatability of the global output stream, Go 1.20
> was also able to make the global generator scale better in programs
> that don’t call rand.Seed, replacing the Go 1 generator with a very
> cheap per-thread wyrand generator already used inside the Go
> runtime. This removed the global mutex and made the top-level
> functions scale much better. Programs that do call rand.Seed fall
> back to the mutex-protected Go 1 generator.

Updates #7123

Change-Id: Ia5452e66bd16b5457d4b1c290a59294545e13291
Signed-off-by: Maisem Ali <maisem@tailscale.com>
2 weeks ago
Brad Fitzpatrick 96712e10a7 health, ipn/ipnlocal: move more health warning code into health.Tracker
In prep for making health warnings rich objects with metadata rather
than a bunch of strings, start moving it all into the same place.

We'll still ultimately need the stringified form for the CLI and
LocalAPI for compatibility but we'll next convert all these warnings
into Warnables that have severity levels and such, and legacy
stringification will just be something each Warnable thing can do.

Updates #4136

Change-Id: I83e189435daae3664135ed53c98627c66e9e53da
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Andrew Dunham be663c84c1 net/tstun: rename natConfig to peerConfig
So that we can use this for additional, non-NAT configuration without it
being confusing.

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1658d59c9824217917a94ee76d2d08f0a682986f
2 weeks ago
Andrew Dunham 10497acc95 net/tstun: refactor natConfig to not be per-family
This was a holdover from the older, pre-BART days and is no longer
necessary.

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I71b892bab1898077767b9ff51cef33d59c08faf8
2 weeks ago
Andrew Lytvynov 13e1355546
scripts/installer.sh: remove unnecessary escaping in grep (#11950)
Updates #11263

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 weeks ago
Percy Wegmann 843afe7c53 ssh/tailssh: add integration test
Updates tailscale/corp#11854

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Jonathan Nobels 45b9aa0d83
net/netmon: remove spammy log statements (#11953)
Updates tailscale/corp#18960

Tests in corp called us using the wrong logging calls.  Removed.
This is logged downstream anyway.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2 weeks ago
Paul Scott 4c08410011 cmd/tailscale/cli: set localClient.UseSocketOnly during flag parsing
This configures localClient correctly during flag parsing, so that the --socket
option is effective when generating tab-completion results. For example, the
following would not connect to the system Tailscale for tab-completion results:

    tailscale --socket=/tmp/tailscaled.socket switch <TAB>

Updates #3793

Signed-off-by: Paul Scott <paul@tailscale.com>
2 weeks ago
Paul Scott ba34943133 cmd/tailscale/cli/ffcomplete: omit and clean completion results
Updates #3793

Signed-off-by: Paul Scott <paul@tailscale.com>
2 weeks ago
Jonathan Nobels fa1303d632
net/netmon: swap to swift-derived defaultRoute on macos (#11936)
Updates tailscale/corp#18960

iOS uses Apple's NetworkMonitor to track the default interface and
there's no reason we shouldn't also use this on macOS, for the same
reasons noted in the comments for why this change was made on iOS.

This eliminates the need to load and parse the routing table when
querying the defaultRouter() in almost all cases.

A slight modification here (on both platforms) to fall back to the default
BSD logic in the unhappy path rather than making assumptions that
may not hold. If netmon is eventually parsing AF_ROUTE and able
to give a consistently correct answer for the default interface index,
we can fall back to that and eliminate the Swift dependency.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2 weeks ago
Gabe Gorelick de85610be0
cmd/k8s-operator/deploy/chart: allow users to configure additional labels for the operator's Pod via Helm chart values.

Fixes #11947

Signed-off-by: Gabe Gorelick <gabe@hightouch.io>
2 weeks ago
Percy Wegmann 2648d475d7 drive: don't allow DELETE on read-only shares
Fixes tailscale/corp#19646

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 weeks ago
Brad Fitzpatrick 7455e027e9 util/slicesx: add AppendMatching
We had this in a different repo, but moving it here, as this a more
fitting package.

Updates #cleanup

Change-Id: I5fb9b10e465932aeef5841c67deba4d77d473d57
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Andrew Dunham fe009c134e ipn/ipnlocal: reset the dialPlan only when the URL is unchanged
Also, reset it in a few more places (e.g. logout, new blank profiles,
etc.) to avoid a few more cases where a pre-existing dialPlan can cause
a new Headscale server take 10+ seconds to connect.

Updates #11938

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I3095173a5a3d9720507afe4452548491e9e45a3e
2 weeks ago
Brad Fitzpatrick c47f9303b0 types/views: use slices.Contains{,Func}
Updates #8419

Change-Id: Ib1a9cb3fb425284b7e02684072a4e7a35975f35c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Joe Tsai 5db80cf2d8
syncs: fix AtomicValue for interface kinds (#11943)
If AtomicValue[T] is used with a T that is an interface kind,
then Store may panic if different concrete types are ever stored.

Fix this by always wrapping in a concrete type.
Technically, this is only needed if T is an interface kind,
but there is no harm in doing it also for non-interface kinds.

Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2 weeks ago
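
The wrapping trick is the standard workaround for sync/atomic.Value; a self-contained sketch of the pattern (not necessarily the exact syncs implementation):

    import "sync/atomic"

    // AtomicValue is a typed wrapper around atomic.Value. atomic.Value panics
    // if successive Stores use different concrete types; wrapping T in a struct
    // means every Store uses the same concrete type, so T may safely be an
    // interface whose stored implementations vary.
    type AtomicValue[T any] struct {
        v atomic.Value
    }

    type wrappedValue[T any] struct{ v T }

    func (a *AtomicValue[T]) Store(v T) { a.v.Store(wrappedValue[T]{v}) }

    // Load returns the stored value and whether anything was stored yet.
    func (a *AtomicValue[T]) Load() (T, bool) {
        w, ok := a.v.Load().(wrappedValue[T])
        return w.v, ok
    }
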
Irbe Krumina 44aa809cb0
cmd/{k8s-nameserver,k8s-operator},k8s-operator: add a kube nameserver, make operator deploy it (#11919)
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.

Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents changes.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.

The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.

DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 weeks ago
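
A compact sketch of the serving pattern described above (A records answered from an in-memory map, NXDOMAIN otherwise), using miekg/dns; the records map, listen address, and UDP-only setup are illustrative, not the actual k8s-nameserver code.

    import (
        "net"
        "strings"

        "github.com/miekg/dns"
    )

    func serve(records map[string]net.IP) error {
        dns.HandleFunc("ts.net.", func(w dns.ResponseWriter, r *dns.Msg) {
            m := new(dns.Msg)
            m.SetReply(r)
            if len(r.Question) == 0 {
                w.WriteMsg(m)
                return
            }
            q := r.Question[0]
            if ip, ok := records[strings.ToLower(q.Name)]; ok && q.Qtype == dns.TypeA {
                m.Answer = append(m.Answer, &dns.A{
                    Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
                    A:   ip,
                })
            } else {
                m.Rcode = dns.RcodeNameError // NXDOMAIN for unknown names and non-A types
            }
            w.WriteMsg(m)
        })
        return (&dns.Server{Addr: ":1053", Net: "udp"}).ListenAndServe()
    }
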
Shaw Drastin 1fe073098c
Reset dial plan when switching profile (#11933)
When switching profile, the server URL can change (e.g.
because of switching to a self-hosted headscale instance).

If it is not reset here, dial plans returned by the old
server (e.g. the Tailscale control server) will be used to
connect to the new server (e.g. a self-hosted headscale server),
and the register request will be blocked by it until
timeout, leading to very slow profile switches.

Updates #11938

Signed-off-by: Shaw Drastin <showier.drastic0a@icloud.com>
2 weeks ago
Jordan Whited a47ce618bd
net/tstun: implement env var for disabling UDP GRO on Linux (#11924)
Certain device drivers (e.g. vxlan, geneve) do not properly handle
coalesced UDP packets later in the stack, resulting in packet loss.

Updates #11026

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2 weeks ago
Mario Minardi ec04c677c0
api.md: add documentation for new split DNS endpoints (#11922)
Add documentation for GET/PATCH/PUT `api/v2/tailnet/<ID>/dns/split-dns`.
These endpoints allow for reading, partially updating, and replacing the
split DNS settings for a given tailnet.

Updates https://github.com/tailscale/corp/issues/19483

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 weeks ago
Andrew Lytvynov 7ba8f03936
ipn/ipnlocal: fix TestOnTailnetDefaultAutoUpdate on unsupported platforms (#11921)
Fixes #11894

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 weeks ago
Irbe Krumina 7d9c3f9897
cmd/k8s-operator/deploy/manifests: check if IPv6 module is loaded before using it (#11867)
Before attempting to enable IPv6 forwarding in the proxy init container
check if the relevant module is found, else the container crashes
on hosts that don't have it.

Updates #11860

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 weeks ago
Andrew Lytvynov d02f1be46a
scripts/installer.sh: enable Alpine community repo if needed (#11837)
The tailscale package is in the community Alpine repo. Check if it's
commented out in `/etc/apk/repositories` and run `setup-apkrepos -c -1`
if it's not.

Fixes #11263

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 weeks ago
Claire Wang 5254f6de06
tailcfg: add suggest exit node UI node attribute (#11918)
Add node attribute to determine whether or not to show suggested exit
node in UI.
Updates tailscale/corp#19515

Signed-off-by: Claire Wang <claire@tailscale.com>
2 weeks ago
Andrew Lytvynov ce5c80d0fe
clientupdate: exec systemctl instead of using dbus to restart (#11923)
Shell out to "systemctl", which lets us drop an extra dependency.

Updates https://github.com/tailscale/corp/issues/18935

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 weeks ago
Fran Bull 6a0fbacc28 appc: setting AdvertiseRoutes explicitly discards app connector routes
This fixes bugs where, after using the CLI to set AdvertiseRoutes, users
were finding that they had to restart tailscaled before the app
connector would advertise previously learned routes again. It also seems
more in line with user expectations.

Fixes #11006
Signed-off-by: Fran Bull <fran@tailscale.com>
2 weeks ago
Fran Bull c27dc1ca31 appc: unadvertise routes when reconfiguring app connector
If the controlknob to persist app connector routes is enabled, when
reconfiguring an app connector unadvertise routes that are no longer
relevant.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2 weeks ago
Fran Bull fea2e73bc1 appc: write discovered domains to StateStore
If the controlknob is on.
This will allow us to remove discovered routes associated with a
particular domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2 weeks ago
Fran Bull 1bd1b387b2 appc: add flag shouldStoreRoutes and controlknob for it
When an app connector is reconfigured and domains to route are removed,
we would like to no longer advertise routes that were discovered for
those domains. In order to do this we plan to store which routes were
discovered for which domains.

Add a controlknob so that we can enable/disable the new behavior.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2 weeks ago
Fran Bull 79836e7bfd appc: add RouteInfo struct and persist it to StateStore
Lays the groundwork for the ability to persist app connectors discovered
routes, which will allow us to stop advertising routes for a domain if
the app connector no longer monitors that domain.

Updates #11008
Signed-off-by: Fran Bull <fran@tailscale.com>
2 weeks ago
Andrew Dunham b2b49cb3d5 wgengine/wgcfg/nmcfg: skip expired peers
Updates tailscale/corp#19315

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1ad0c8796efe3dd456280e51efaf81f6d2049772
2 weeks ago
Mario Minardi 74c399483c
api.md: explicitly set content-type headers in POST CURL examples (#11916)
Explicitly set `-H "Content-Type: application/json"` in CURL examples
for POST endpoints as the default content type used by CURL is otherwise
`application/x-www-form-urlencoded` and these endpoints expect JSON data.

Updates https://github.com/tailscale/tailscale/issues/11914

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 weeks ago
Irbe Krumina 1452faf510
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login on kube, check Secret create perms, allow empty state Secret (#11326)
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login and empty state Secrets, check perms

* Allow users to pre-create empty state Secrets

* Add a fake internal kube client, test functionality that has dependencies on kube client operations.

* Fix an issue where interactive login was not allowed in an edge case where state Secret does not exist

* Make the CheckSecretPermissions method report whether we have permissions to create/patch a Secret if it's determined that these operations will be needed

Updates tailscale/tailscale#11170

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 weeks ago
Kristoffer Dalby 1e6cdb7d86 api.md: fix missing links after move of device posture
Updates tailscale/corp#18572

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
3 weeks ago
Brad Fitzpatrick b9adbe2002 net/{interfaces,netmon}, all: merge net/interfaces package into net/netmon
In prep for most of the package funcs in net/interfaces to become
methods in a long-lived netmon.Monitor that can cache things.  (Many
of the funcs are very heavy to call regularly, whereas the long-lived
netmon.Monitor can subscribe to things from the OS and remember
answers to questions it's asked regularly later)

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: Ie4e8dedb70136af2d611b990b865a822cd1797e5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 6b95219e3a net/netmon, all: add netmon.State type alias of interfaces.State
... in prep for merging the net/interfaces package into net/netmon.

This is a no-op change that updates a bunch of the API signatures ahead of
a future change to actually move things (and remove the type alias)

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I477613388f09389214db0d77ccf24a65bff2199c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Irbe Krumina 45f0721530
cmd/containerboot: wait on tailscaled process only (#11897)
Modifies containerboot to wait on tailscaled process
only, not on any child process of containerboot.
Waiting on any subprocess was racing with Go's
exec.Cmd.Run, used to run iptables commands and
that starts its own subprocesses and waits on them.

Containerboot itself does not run anything else
except for tailscaled, so there shouldn't be a need
to wait on anything else.

Updates tailscale/tailscale#11593

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
3 weeks ago
Brad Fitzpatrick 3672f29a4e net/netns, net/dns/resolver, etc: make netmon required in most places
The goal is to move more network state accessors to netmon.Monitor
where they can be cheaper/cached. But first (this change and others)
we need to make sure the one netmon.Monitor is plumbed everywhere.

Some notable bits:

* tsdial.NewDialer is added, taking a now-required netmon

* because a tsdial.Dialer always has a netmon, anything taking both
  a Dialer and a NetMon is now redundant; take only the Dialer and
  get the NetMon from that if/when needed.

* netmon.NewStatic is added, primarily for tests

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I877f9cb87618c4eb037cee098241d18da9c01691
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 4f73a26ea5 ipn/ipnlocal: skip TestOnTailnetDefaultAutoUpdate on macOS for now
While it's broken.

Updates #11894

Change-Id: I24698707ffe405471a14ab2683aea7e836531da8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 7a62dddeac net/netcheck, wgengine/magicsock: make netmon.Monitor required
This has been a TODO for ages. Time to do it.

The goal is to move more network state accessors to netmon.Monitor
where they can be cheaper/cached.

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: I60fc6508cd2d8d079260bda371fc08b6318bcaf1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 4dece0c359 net/netutil: remove a use of deprecated interfaces.GetState
I'm working on moving all network state queries to be on
netmon.Monitor, removing old APIs.

Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299

Change-Id: If0de137e0e2e145520f69e258597fb89cf39a2a3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 7f587d0321 health, wgengine/magicsock: remove last of health package globals
Fixes #11874
Updates #4136

Change-Id: Ib70e6831d4c19c32509fe3d7eee4aa0e9f233564
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Jonathan Nobels 71e9258ad9
ipn/ipnlocal: fix null dereference for early suggested exit node queries (#11885)
Fixes tailscale/corp#19558

A request for the suggested exit nodes that occurs too early in the
VPN lifecycle would result in a null deref of the netmap and/or
the netcheck report.  This checks both and errors out.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
3 weeks ago
Brad Fitzpatrick 745931415c health, all: remove health.Global, finish plumbing health.Tracker
Updates #11874
Updates #4136

Change-Id: I414470f71d90be9889d44c3afd53956d9f26cd61
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick a4a282cd49 control/controlclient: plumb health.Tracker
Updates #11874
Updates #4136

Change-Id: Ia941153bd83523f0c8b56852010f5231d774d91a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 6d69fc137f ipn/{ipnlocal,localapi},wgengine{,/magicsock}: plumb health.Tracker
Down to 25 health.Global users. After this remains controlclient &
net/dns & wgengine/router.

Updates #11874
Updates #4136

Change-Id: I6dd1856e3d9bf523bdd44b60fb3b8f7501d5dc0d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Irbe Krumina df8f40905b
cmd/k8s-operator,k8s-operator: optionally serve tailscaled metrics on Pod IP (#11699)
Adds a new .spec.metrics field to ProxyClass to allow users to optionally serve
client metrics (tailscaled --debug) on <Pod-IP>:9001.
Metrics cannot currently be enabled for proxies that egress traffic to tailnet
and for Ingress proxies with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation
(because they currently forward all cluster traffic to their respective backends).

The assumption is that users will want to have these metrics enabled
continuously to be able to monitor proxy behaviour (as opposed to enabling
them temporarily for debugging). Hence we expose them on Pod IP to make it
easier to consume them, e.g. via a Prometheus PodMonitor.

Updates tailscale/tailscale#11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
3 weeks ago
Brad Fitzpatrick 723c775dbb tsd, ipnlocal, etc: add tsd.System.HealthTracker, start some plumbing
This adds a health.Tracker to tsd.System, accessible via
a new tsd.System.HealthTracker method.

In the future, that new method will return a tsd.System-specific
HealthTracker, so multiple tsnet.Servers in the same process are
isolated. For now, though, it just always returns the temporary
health.Global value. That permits incremental plumbing over a number
of changes. When the second to last health.Global reference is gone,
then the tsd.System.HealthTracker implementation can return a private
Tracker.

The primary plumbing this does is adding it to LocalBackend and its
dozen and change health calls. A few misc other callers are also
plumbed. Subsequent changes will flesh out other parts of the tree
(magicsock, controlclient, etc).

Updates #11874
Updates #4136

Change-Id: Id51e73cfc8a39110425b6dc19d18b3975eac75ce
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick cb66952a0d health: permit Tracker method calls on nil receiver
In prep for tsd.System Tracker plumbing throughout tailscaled,
defensively permit all methods on Tracker to accept a nil receiver
without crashing, lest I screw something up later. (A health tracking
system that itself causes crashes would be no good.) Methods on nil
receivers should not be called, so a future change will also collect
their stacks (and panic during dev/test), but we should at least not
crash in prod.

This also locks that in with a test using reflect to automatically
call all methods on a nil receiver and check they don't crash.

Updates #11874
Updates #4136

Change-Id: I8e955046ebf370ec8af0c1fb63e5123e6282a9d3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
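
The defensive pattern is simply an early return on a nil receiver; a self-contained sketch (field and method names illustrative, not the real health.Tracker internals):

    import "sync"

    type Tracker struct {
        mu       sync.Mutex
        warnings map[string]error
    }

    // setWarning records a warning. It is safe to call on a nil *Tracker:
    // plumbing mistakes degrade to a no-op instead of crashing tailscaled.
    func (t *Tracker) setWarning(key string, err error) {
        if t == nil {
            return
        }
        t.mu.Lock()
        defer t.mu.Unlock()
        if t.warnings == nil {
            t.warnings = make(map[string]error)
        }
        t.warnings[key] = err
    }
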
Chris Palmer 7349b274bd
safeweb: handle mux pattern collisions more generally (#11801)
Fixes #11800

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
3 weeks ago
Brad Fitzpatrick 5b32264033 health: break Warnable into a global and per-Tracker value halves
Previously it was both metadata about the class of warnable item as
well as the value.

Now it's only metadata and the value is per-Tracker.

Updates #11874
Updates #4136

Change-Id: Ia1ed1b6c95d34bc5aae36cffdb04279e6ba77015
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick ebc552d2e0 health: add Tracker type, in prep for removing global variables
This moves most of the health package global variables to a new
`health.Tracker` type.

But then rather than plumbing the Tracker in tsd.System everywhere,
this only goes halfway and makes one new global Tracker
(`health.Global`) that all the existing callers now use.

A future change will eliminate that global.

Updates #11874
Updates #4136

Change-Id: I6ee27e0b2e35f68cb38fecdb3b2dc4c3f2e09d68
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Claire Wang d5fc52a0f5
tailcfg: add auto exit node attribute (#11871)
Updates tailscale/corp#19515

Signed-off-by: Claire Wang <claire@tailscale.com>
3 weeks ago
Sonia Appasamy 18765cd4f9 release/dist/qnap: omit .qpkg.codesigning files
Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
3 weeks ago
Percy Wegmann 955ad12489 ipn/ipnlocal: only show Taildrive peers to which ACLs grant us access
This improves convenience and security.

* Convenience - no need to see nodes that can't share anything with you.
* Security - malicious nodes can't expose shares to peers that aren't
             allowed to access their shares.

Updates tailscale/corp#19432

Signed-off-by: Percy Wegmann <percy@tailscale.com>
3 weeks ago
Sonia Appasamy 5d4b4ffc3c release/dist/qnap: update perms for tmpDir files
Allows all users to read all files, and .sh/.cgi files to be
executable.

Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
3 weeks ago
Lee Briggs 14ac41febc
cmd/k8s-operator,k8s-operator: proxyclass affinity (#11862)
Add the ability to set affinity rules via ProxyClass.

Updates #11861

Signed-off-by: Lee Briggs <lee@leebriggs.co.uk>
3 weeks ago
Anton Tolchanov 31e6bdbc82 ipn/ipnlocal: always stop the engine on auth when key has expired
If seamless key renewal is enabled, we typically do not stop the engine
(deconfigure networking). However, if the node key has expired there is
no point in keeping the connection up, and it might actually prevent
key renewal if auth relies on endpoints routed via app connectors.

Fixes tailscale/corp#5800

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
3 weeks ago
Andrea Gottardo 1d3e77f373
util/syspolicy: add ReadStringArray interface (#11857)
Fixes tailscale/corp#19459

This PR adds the ability for users of the syspolicy handler to read string arrays from the MDM solution configured on the system.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
3 weeks ago
Sonia Appasamy 0cce456ee5 release/dist/qnap: use tmp file directory for qpkg building
This change allows for the release/dist/qnap package to be used
outside of the tailscale repo (notably, will be used from corp),
by using an embedded file system for build files which gets
temporarily written to a new folder during qnap build runs.

Without this change, when used from corp, the release/dist/qnap
folder will fail to be found within the corp repo, causing
various steps of the build to fail.

The file renames in this change are to combine the build files
into a /files folder, separated into /scripts and /Tailscale.

Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
3 weeks ago
Percy Wegmann c8e912896e wgengine/router: consolidate routes before reconfiguring router for mobile clients
This helps reduce memory pressure on tailnets with large numbers
of routes.

Updates tailscale/corp#19332

Signed-off-by: Percy Wegmann <percy@tailscale.com>
3 weeks ago
Irbe Krumina add62af7c6
util/linuxfw,go.{mod,sum}: don't log errors when deleting non-existent chains and rules (#11852)
This PR bumps iptables to a newer version that has a function to detect
'NotExists' errors and uses that function to determine whether errors
received on iptables rule and chain cleanup are because the rule/chain
does not exist; if so, don't log the error.

Updates corp#19336

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
3 weeks ago
Irbe Krumina 3af0f526b8
cmd{containerboot,k8s-operator},util/linuxfw: support ExternalName Services (#11802)
* cmd/containerboot,util/linuxfw: support proxy backends specified by DNS name

Adds support for optionally configuring containerboot to proxy
traffic to backends configured by passing TS_EXPERIMENTAL_DEST_DNS_NAME env var
to containerboot.
Containerboot will periodically (every 10 minutes) attempt to resolve
the DNS name and ensure that all traffic sent to the node's
tailnet IP gets forwarded to the resolved backend IP addresses.

Currently:
- if the firewall mode is iptables, traffic will be load balanced
across the backend IP addresses using round robin. There are
no health checks for whether the IPs are reachable.
- if the firewall mode is nftables, traffic will only be forwarded
to the first IP address in the list. This is to be improved.

* cmd/k8s-operator: support ExternalName Services

Adds support for exposing endpoints that are accessible from within
a cluster to the tailnet via DNS names, using ExternalName Services.
This can be done by annotating the ExternalName Service with the
tailscale.com/expose: "true" annotation.
The operator will deploy a proxy configured to route tailnet
traffic to the backend IPs that service.spec.externalName
resolves to. The backend IPs must be reachable from the operator's
namespace.

Updates tailscale/tailscale#10606

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
3 weeks ago
License Updater bf46bff678 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
3 weeks ago
Percy Wegmann b7e5122226 util/osuser: add unit test for parseGroupIds
Updates #11682

Signed-off-by: Percy Wegmann <percy@tailscale.com>
3 weeks ago
Andrew Dunham e985c6e58f ssh/tailssh: try fetching group IDs for user with the 'id' command
Since the tailscaled binaries that we distribute are static and don't
link cgo, we previously wouldn't fetch group IDs that are returned via
NSS. Try shelling out to the 'id' command, similar to how we call
'getent', to detect such cases.
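
A minimal sketch of the fallback (the helper name is hypothetical; the real tailssh code merges these IDs with results from other sources):

```
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// groupIDsViaIDCmd shells out to "id -G <user>", which prints the user's
// numeric group IDs separated by spaces, and returns them as strings.
func groupIDsViaIDCmd(username string) ([]string, error) {
	out, err := exec.Command("id", "-G", username).Output()
	if err != nil {
		return nil, fmt.Errorf("running id: %w", err)
	}
	return strings.Fields(string(out)), nil
}
```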

Updates #11682

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I9bdc938bd76c71bc130d44a97cc2233064d64799
3 weeks ago
Kristoffer Dalby 9779eb6dba api.md: move device posture api to api.md
Updates tailscale/corp#18572

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
3 weeks ago
Brad Fitzpatrick c07aa2cfed syncs: fix flaky test by deleting the code it tested (Watch)
Fixes #11766

Change-Id: Id5a875aab23eb1b48a57dc379d0cdd42412fd18b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Joe Tsai 63b3c82587
ipn/local: log OS-specific diagnostic information as JSON (#11700)
There is an undocumented 16KiB limit for text log messages.
However, the limit for JSON messages is 256KiB.
Even worse, logging JSON as text results in significant overhead
since each double quote needs to be escaped.

Instead, use logger.Logf.JSON to explicitly log the info as JSON.

We also modify osdiag to return the information as structured data
rather than implicitly have the package log on our behalf.
This gives more control to the caller on how to log.

Updates #7802

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
3 weeks ago
Andrew Lytvynov 06502b9048
ipn/ipnlocal: reset auto-updates if unsupported on profile load (#11838)
Prior to
1613b18f82 (diff-314ba0d799f70c8998940903efb541e511f352b39a9eeeae8d475c921d66c2ac),
nodes could set AutoUpdate.Apply=true on unsupported platforms via
`EditPrefs`. Specifically, this affects tailnets where default
auto-updates are on.

Fix up those invalid prefs on profile reload, as a migration.

Updates #11544

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
Sonia Appasamy 0a84215036 release/dist/qnap: add qnap target builder
Creates new QNAP builder target, which builds go binaries then uses
docker to build into QNAP packages. Much of the docker/script code
here is pulled over from https://github.com/tailscale/tailscale-qpkg,
with adaptation into our builder structures.

The qnap/Tailscale folder contains static resources needed to build
Tailscale qpkg packages, and is an exact copy of the existing folder
in the tailscale-qpkg repo.

Builds can be run with:
```
sudo ./tool/go run ./cmd/dist build qnap
```

Updates tailscale/tailscale-qpkg#135

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
3 weeks ago
Andrew Lytvynov b743b85dad
ipn/ipnlocal,ssh/tailssh: reject c2n /update if SSH conns are active (#11820)
Since we already track active SSH connections, it's not hard to
proactively reject updates until those finish. We attempt to do the same
on the control side, but the detection latency for new connections is in
the minutes, which is not fast enough for common short sessions.

Handle a `force=true` query parameter to override this behavior, so that
control can still trigger an update on a server where some long-running
abandoned SSH session is open.

Updates https://github.com/tailscale/corp/issues/18556

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
Brad Fitzpatrick 5100bdeba7 types/persist: remove unused field Persist.Provider
It was only obviously unused after the previous change, c39cde79d.

Updates #19334

Change-Id: I9896d5fa692cb4346c070b4a339d0d12340c18f7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Brad Fitzpatrick c39cde79d2 tailcfg: remove some unused fields from RegisterResponseAuth
Fixes #19334

Change-Id: Id6463f28af23078a7bc25b9280c99d4491bd9651
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Brad Fitzpatrick 05bfa022f2 tailcfg: pointerify RegisterRequest.Auth, omitemptify RegisterResponseAuth
We were storing server-side lots of:

    "Auth":{"Provider":"","LoginName":"","Oauth2Token":null,"AuthKey":""},

That was about 7% of our total storage of pending RegisterRequest
bodies.

Updates tailscale/corp#19327

Change-Id: Ib73842759a2b303ff5fe4c052a76baea0d68ae7d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Andrew Dunham 375617c5c8 net/tsdial: assume all connections are affected if no default route is present
If this happens, it results in us pessimistically closing more
connections than might be necessary, but is more correct since we won't
"miss" a change to the default route interface and keep trying to send
data over a nonexistent interface, or one that can't reach the internet.

Updates tailscale/corp#19124

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ia0b8b04cb8cdcb0da0155fd08751c9dccba62c1a
4 weeks ago
Nick Khyl 9e1c86901b wgengine/router: fix the Tailscale-In firewall rule to work on domain networks
The Network Location Awareness service identifies networks authenticated against
an Active Directory domain and categorizes them as "Domain Authenticated".
This includes the Tailscale network if a Domain Controller is reachable through it.

If a network is categorized as NLM_NETWORK_CATEGORY_DOMAIN_AUTHENTICATED,
it is not possible to override its category, and we shouldn't attempt to do so.
Additionally, our Windows Firewall rules should be compatible with both private
and domain networks.

This fixes both issues.

Fixes #11813

Signed-off-by: Nick Khyl <nickk@tailscale.com>
4 weeks ago
Andrew Lytvynov bff527622d
ipn/ipnlocal,clientupdate: disallow auto-updates in containers (#11814)
Containers are typically immutable and should be updated as a whole (and
not individual packages within). Deny enablement of auto-updates in
containers.

Also, add the missing check in EditPrefs in LocalAPI, to catch cases
like tailnet default auto-updates getting enabled for nodes that don't
support it.

Updates #11544

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
4 weeks ago
Andrew Lytvynov b3fb3bf084
clientupdate: return OS-specific version from LatestTailscaleVersion (#11812)
We don't always have the same latest version for all platforms (for
example, 1.64.2 is Synology+Windows only), so we should use the OS-specific
result from the pkgs JSON response instead of the main Version field.

Updates #11795

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
4 weeks ago
Irbe Krumina bbe194c80d
cmd/k8s-operator: correctly determine cluster domain (#11512)
The Kubernetes cluster domain defaults to 'cluster.local', but can also be customized.
We need to determine the cluster domain to set up in-cluster forwarding to our egress proxies.
This was previously hardcoded to 'cluster.local', so the egress proxies were not usable in clusters with custom domains.
This PR ensures that we attempt to determine the cluster domain by parsing /etc/resolv.conf.
If the cluster domain cannot be determined from /etc/resolv.conf, we fall back to 'cluster.local'.
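
A minimal sketch of the idea (assumed parsing logic, not the operator's exact code): read the "search" line from /etc/resolv.conf and take whatever follows "svc.":

```
package sketch

import (
	"bufio"
	"os"
	"strings"
)

// clusterDomain guesses the cluster domain from the resolv.conf search path,
// which in Kubernetes typically ends with "svc.<cluster-domain>".
func clusterDomain() string {
	const fallback = "cluster.local"
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		return fallback
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) < 2 || fields[0] != "search" {
			continue
		}
		for _, d := range fields[1:] {
			if i := strings.Index(d, "svc."); i >= 0 {
				return strings.TrimSuffix(d[i+len("svc."):], ".")
			}
		}
	}
	return fallback
}
```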

Updates tailscale/tailscale#10399,tailscale/tailscale#11445

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
4 weeks ago
Percy Wegmann d16c1293e9 ipn/ipnlocal: remove origin and referer headers from Taildrive requests
peerapi does not want these, but rclone includes them.
Removing them allows rclone to work with Taildrive configured
as a WebDAV remote.
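
A minimal sketch of the behavior (a hypothetical middleware wrapper; the real change happens inside the Taildrive/peerapi request handling):

```
package sketch

import "net/http"

// stripBrowserHeaders removes headers that rclone sends but that the
// Taildrive peerapi endpoint does not want to see.
func stripBrowserHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Header.Del("Origin")
		r.Header.Del("Referer")
		next.ServeHTTP(w, r)
	})
}
```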

Updates #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
4 weeks ago
Percy Wegmann 94c0403104 ipn/ipnlocal: strip origin and referer headers from Taildrive requests
peerapi does not want these, but rclone includes them.
Stripping them out allows rclone to work with Taildrive configured
as a WebDAV remote.

Updates #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
4 weeks ago
Percy Wegmann 787f8c08ec drive: rewrite Location headers
This ensures that MOVE, LOCK and any other verbs that use the Location
header work correctly.

Fixes #11758

Signed-off-by: Percy Wegmann <percy@tailscale.com>
4 weeks ago
Claire Wang c24f2eee34
tailcfg: rename exit node destination network flow log node attribute (#11779)
Updates tailscale/corp#18625

Signed-off-by: Claire Wang <claire@tailscale.com>
4 weeks ago
kari-ts 048cb61dd0
interfaces: create android impl (#11784)
-Move Android impl into interfaces_android.go
-Instead of using ip route to get the interface name, use the one passed in by Android (ip route is restricted in Android 13+ per termux/termux-app#2993)

Follow-up will be to do the same for router

Fixes tailscale/corp#19215
Fixes tailscale/corp#19124

Signed-off-by: kari-ts <kari@tailscale.com>
4 weeks ago
Aaron Klotz 7132b782d4 hostinfo: use Distro field for distinguishing Windows Server builds
Some editions of Windows server share the same build number as their
client counterpart; we must use an additional field found in the OS
version information to distinguish between them.

Even though "Distro" has Linux connotations, it is the most appropriate
hostinfo field. What is Windows Server if not an alternate distribution
of Windows? This PR populates Distro with "Server" when applicable.

Fixes #11785

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
4 weeks ago
Percy Wegmann 02c6af2a69 cmd/tailscale: clarify Taildrive grants in help text
Fixes #cleanup

Signed-off-by: Percy Wegmann <percy@tailscale.com>
4 weeks ago
Chris Palmer bdfaef4879
safeweb: allow object-src: self in CSP (#11782)
This change is safe (self is still safe, by
definition), and makes the code match the comment.

Updates #cleanup

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
4 weeks ago
Andrew Lytvynov e775de3c63
go.mod: bump golang.org/x/net (#11775)
One more place to pick up a fix for
https://pkg.go.dev/vuln/GO-2024-2687.

Updates https://github.com/tailscale/corp/issues/18893

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
4 weeks ago
Adrian Dewhurst c8b0adb382 docs/windows/policy: add missing key expiration warning interval
Fixes #11345

Change-Id: Ib53b639690b77d1b7d857304dca2119f197227ce
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
4 weeks ago
Brad Fitzpatrick 03d5d1f0f9 wgengine/magicsock: disable portmapper in tunchan-faked tests
Most of the magicsock tests fake the network, simulating packets going
out and coming in. There's no reason to actually hit your router to do
UPnP/NAT-PMP/PCP in tests. But while debugging thousands of
iterations of tests to deflake some things, I saw it slamming my
router. This stops that.

Updates #11762

Change-Id: I59b9f48f8f5aff1fa16b4935753d786342e87744
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Andrew Lytvynov 22bd506129
ipn/ipnlocal: hold the mutex when in onTailnetDefaultAutoUpdate (#11786)
Turns out, profileManager is not safe for concurrent use and I missed
all the locking infrastructure in LocalBackend, oops.

I was not able to reproduce the race even with `go test -count 100`, but
this seems like an obvious fix.

Fixes #11773

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
4 weeks ago
Chris Palmer 88a7767492
safeweb: set SameSite=Strict, with an option for Lax (#11781)
Fixes #11780

Signed-off-by: Chris Palmer <cpalmer@tailscale.com>
4 weeks ago
dependabot[bot] dd48cad89a build(deps-dev): bump vite from 5.1.4 to 5.1.7 in /client/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.1.4 to 5.1.7.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.1.7/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.1.7/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
4 weeks ago
Andrew Dunham b85c2b2313 net/dns/resolver: use SystemDial in DoH forwarder
This ensures that we close the underlying connection(s) when a major
link change happens. If we don't do this, on mobile platforms switching
between WiFi and cellular can result in leftover connections in the
http.Client's connection pool which are bound to the "wrong" interface.

Updates #10821
Updates tailscale/corp#19124

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ibd51ce2efcaf4bd68e14f6fdeded61d4e99f9a01
4 weeks ago
Paul Scott 82394debb7 cmd/tailscale: add shell tab-completion
The approach is lifted from cobra: `tailscale completion bash` emits a bash
script for configuring the shell's autocomplete:

    . <( tailscale completion bash )

so that typing:

    tailscale st<TAB>

invokes:

    tailscale completion __complete -- st

RELNOTE=tailscale CLI now supports shell tab-completion

Fixes #3793

Signed-off-by: Paul Scott <paul@tailscale.com>
4 weeks ago
Brad Fitzpatrick 21a0fe1b9b ipn/store: omit AWS & Kubernetes support on 'small' Linux GOARCHes
This removes AWS and Kubernetes support from Linux binaries by default
on GOARCH values where people don't typically run on AWS or use
Kubernetes, such as 32-bit mips CPUs.

It primarily focuses on optimizing for the static binaries we
distribute. But for people building it themselves, they can set
ts_kube or ts_aws (the opposite of ts_omit_kube or ts_omit_aws) to
force it back on.

Makes tailscaled binary ~2.3MB (~7%) smaller.

Updates #7272, #10627 etc

Change-Id: I42a8775119ce006fa321462cb2d28bc985d1c146
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
dependabot[bot] 449be38e03
build(deps): bump google.golang.org/protobuf from 1.32.0 to 1.33.0 (#11410)
* build(deps): bump google.golang.org/protobuf from 1.32.0 to 1.33.0

Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* cmd/{derper,stund}: update depaware.txt

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Lytvynov <awly@tailscale.com>
4 weeks ago
Irbe Krumina 3ef7f895c8
go.{mod,sum}: bump nftables to the latest commit (#11772)
Updates #deps

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
4 weeks ago
Andrew Dunham 226486eb9a net/interfaces: handle removed interfaces in State.Equal
This wasn't previously handling the case where an interface in s2 was
removed and not present in s1, and would cause the Equal method to
incorrectly return that the states were equal.
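
A minimal sketch of why the comparison must account for both directions (a simplified state shape, not the real interfaces.State):

```
package sketch

import "net/netip"

type state struct {
	InterfaceIPs map[string][]netip.Prefix
}

// equal compares two states; the length check catches interfaces that are
// present in s2 but have been removed from s1.
func equal(s1, s2 *state) bool {
	if len(s1.InterfaceIPs) != len(s2.InterfaceIPs) {
		return false
	}
	for name, ips1 := range s1.InterfaceIPs {
		ips2, ok := s2.InterfaceIPs[name]
		if !ok || len(ips1) != len(ips2) {
			return false
		}
		for i := range ips1 {
			if ips1[i] != ips2[i] {
				return false
			}
		}
	}
	return true
}
```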

Updates tailscale/corp#19124

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I3af22bc631015d1ddd0a1d01bfdf312161b9532d
4 weeks ago
Paul Scott 454a03a766 cmd/tailscale/cli: prepend "tailscale" to usage errors
Updates #11626

Signed-off-by: Paul Scott <paul@tailscale.com>
4 weeks ago
Paul Scott d07ede461a cmd/tailscale/cli: fix "subcommand required" errors when typod
Fixes #11672

Signed-off-by: Paul Scott <paul@tailscale.com>
4 weeks ago
Paul Scott 3ff3445e9d cmd/tailscale/cli: improve ShortHelp/ShortUsage unit test, fix new errors
Updates #11364

Signed-off-by: Paul Scott <paul@tailscale.com>
4 weeks ago
Paul Scott eb34b8a173 cmd/tailscale/cli: remove explicit usageFunc - its default
Updates #cleanup

Signed-off-by: Paul Scott <paul@tailscale.com>
4 weeks ago
Paul Scott a50e4e604e cmd/tailscale/cli: remove duplicate "tailscale " in drive subcmd usage
Updates #cleanup

Signed-off-by: Paul Scott <paul@tailscale.com>
4 weeks ago
Paul Scott 62d4be873d cmd/tailscale/cli: fix drive --help usage identation
Updates #cleanup

Signed-off-by: Paul Scott <paul@tailscale.com>
4 weeks ago
Brad Fitzpatrick 7c1d6e35a5 all: use Go 1.22 range-over-int
Updates #11058

Change-Id: I35e7ef9b90e83cac04ca93fd964ad00ed5b48430
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 068db1f972 net/interfaces: delete unused unexported function
It should've been deleted in 11ece02f52.

Updates #9040

Change-Id: If8a136bdb6c82804af658c9d2b0a8c63ce02d509
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Jonathan Nobels 7e2b4268d6
ipn/{localapi, ipnlocal}: forget the prior exit node when localAPI is used to zero the ExitNodeID (#11681)
Updates tailscale/corp#18724

When localAPI clients directly set ExitNodeID to "", the expected behaviour is that the prior exit node also gets zeroed, effectively setting the UI state back to 'no exit node was ever selected'.

The InternalExitNodePrior has been changed to be a non-opaque type, as it is read by the UI to render the user's last selected exit node, and must be concrete. Future-us can either break this, or deprecate it and replace it with something more interesting.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
1 month ago
Brad Fitzpatrick 0fba9e7570 cmd/tailscale/cli: prevent concurrent Start calls in 'up'
Seems to deflake tstest/integration tests. I can't reproduce it
anymore on one of my VMs that was consistently flaking after a dozen
runs before. Now I can run hundreds of times.

Updates #11649
Fixes #7036

Change-Id: I2f7d4ae97500d507bdd78af9e92cd1242e8e44b8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Irbe Krumina 26f9bbc02b
cmd/k8s-operator,k8s-operator: document tailscale.com Custom Resource Definitions better. (#11665)
Updates tailscale/tailscale#10880

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
1 month ago
Adrian Dewhurst ca5cb41b43 tailcfg: document use of CapMap for peers
Updates tailscale/corp#17516
Updates #11508

Change-Id: Iad2dafb38ffb9948bc2f3dfaf9c268f7d772cf56
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
1 month ago
Brad Fitzpatrick 3c1e2bba5b ipn/ipnlocal: remove outdated iOS hacky workaround in Start
We haven't needed this hack for quite some time, Andrea says.

Updates #11649

Change-Id: Ie854b7edd0a01e92495669daa466c7c0d57e7438
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick dd6c76ea24 ipn: remove unused Options.LegacyMigrationPrefs
I'm on a mission to simplify LocalBackend.Start and its locking
and deflake some tests.

I noticed this hasn't been used since March 2023 when it was removed
from the Windows client in corp 66be796d33c.

So, delete.

Updates #11649

Change-Id: I40f2cb75fb3f43baf23558007655f65a8ec5e1b2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 7ec0dc3834 ipn/ipnlocal: make StartLoginInteractive take (yet unused) context
In prep for future fix to undermentioned issue.

Updates tailscale/tailscale#7036

Change-Id: Ide114db917dcba43719482ffded6a9a54630d99e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Claire Wang 9171b217ba
cmd/tailscale, ipn/ipnlocal: add suggest exit node CLI option (#11407)
Updates tailscale/corp#17516

Signed-off-by: Claire Wang <claire@tailscale.com>
1 month ago
Charlotte Brandhorst-Satzkorn 449f46c207
wgengine/magicsock: rebind/restun if a syscall.EPERM error is returned (#11711)
We have seen in macOS client logs that an "operation not permitted"
(syscall.EPERM) error is returned when we attempt to send traffic.
This may be caused by security software on the client.

This change will perform a rebind and restun if we receive a
syscall.EPERM error on clients running darwin. Rebinds will only be
called if we haven't performed one specifically for an EPERM error in
the past 5 seconds.
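
A minimal sketch of the throttle logic (field and method names are hypothetical; the real magicsock code ties this into its send path):

```
package sketch

import (
	"errors"
	"runtime"
	"sync"
	"syscall"
	"time"
)

type conn struct {
	mu              sync.Mutex
	lastEPERMRebind time.Time
}

// maybeRebindOnError reports whether a rebind+restun should be triggered for
// this send error, at most once per 5 seconds and only on darwin.
func (c *conn) maybeRebindOnError(err error) bool {
	if runtime.GOOS != "darwin" || !errors.Is(err, syscall.EPERM) {
		return false
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.lastEPERMRebind) < 5*time.Second {
		return false // already rebound for an EPERM recently
	}
	c.lastEPERMRebind = time.Now()
	// The real code would kick off the rebind and re-STUN here.
	return true
}
```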

Updates #11710

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
1 month ago
Will Norris 14c8b674ea Revert "licenses: add gliderlabs/ssh license"
The gliderlabs/ssh license is actually already included in the standard
package listing.  I'm not sure why I thought it wasn't.

Updates tailscale/corp#5780

This reverts commit 11dca08e93.

Signed-off-by: Will Norris <will@tailscale.com>
1 month ago
Brad Fitzpatrick 952e06aa46 wgengine/router: don't attempt route cleanup on Synology
Trying to run iptables/nftables on Synology pauses for minutes with
lots of errors and ultimately does nothing as it's not used and we
lack permissions.

This fixes a regression from db760d0bac (#11601) that landed
between Synology testing on unstable 1.63.110 and 1.64.0 being cut.

Fixes #11737

Change-Id: Iaf9563363b8e45319a9b6fe94c8d5ffaecc9ccef
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Irbe Krumina 38fb23f120
cmd/k8s-operator,k8s-operator: allow users to configure proxy env vars via ProxyClass (#11743)
Adds a new ProxyClass.spec.statefulSet.pod.{tailscaleContainer,tailscaleInitContainer}.Env field
that allows users to provide key/value pairs that will be set as env vars for the respective containers.
Allows overriding all containerboot env vars,
but warns that this is not supported and might break (in docs + a warning when validating ProxyClass).

Updates tailscale/tailscale#10709

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
1 month ago
Brad Fitzpatrick 9258bcc360 Makefile: fix default SYNO_ARCH in Makefile
It was broken with the move to dist in 32e0ba5e68 which doesn't accept
amd64 anymore.

Updates #cleanup

Change-Id: Iaaaba2d73c6a09a226934fe8e5c18b16731ee7a6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick b9aa7421d6 ipn/ipnlocal: remove some dead code (legacyBackend methods) from LocalBackend
Nothing used it.

Updates #11649

Change-Id: Ic1c331d947974cd7d4738ff3aafe9c498853689e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick a6739c49df paths: set default state path on AIX
Updates #11361

Change-Id: I196727a540be6b7c75303f9958490b1d76189fd6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 271cfdb3d3 util/syspolicy: clean up doc grammar and consistency
Updates #cleanup

Change-Id: I912574cbd5ef4d8b7417b8b2a9b9a2ccfef88840
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick bad3159b62 ipn/ipnlocal: delete useless SetControlClientGetterForTesting use
Updates #11649

Change-Id: I56c069b9c97bd3e30ff87ec6655ec57e1698427c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 8186cd0349 ipn/ipnlocal: delete redundant TestStatusWithoutPeers
We have tstest/integration nowadays.

And this test was one of the lone holdouts using the to-be-nuked
SetControlClientGetterForTesting.

Updates #11649

Change-Id: Icf8a6a2e9b8ae1ac534754afa898c00dc0b7623b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 68043a17c2 ipn/ipnlocal: centralize assignments to cc + ccAuto in new method
cc vs ccAuto is a mess. It needs to go. But this is a baby step towards
getting there.

Updates #11649

Change-Id: I34f33934844e580bd823a7d8f2b945cf26c87b3b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 970b1e21d0 ipn/ipnlocal: inline assertClientLocked into its now sole caller
Updates #11649

Change-Id: I8e2a5e59125a0cad5c0a8c9ed8930585f1735d03
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 170c618483 ipn/ipnlocal: remove dead code now that Android uses LocalAPI instead
The new Android app and its libtailscale don't use this anymore;
it uses LocalAPI like other clients now.

Updates #11649

Change-Id: Ic9f42b41e0e0280b82294329093dc6c275f41d50
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Flakes Updater 65f215115f go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
1 month ago
Brad Fitzpatrick a1abd12f35 cmd/tailscaled, net/tstun: build for aix/ppc64
At least in userspace-networking mode.

Fixes #11361

Change-Id: I78d33f0f7e05fe9e9ee95b97c99b593f8fe498f2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
kari-ts 1cd51f95c7
ipnlocal: enable allow LAN for android (#11709)
Updates tailscale/corp#18984
Updates tailscale/corp#18202
1 month ago
Claire Wang 976d3c7b5f
tailcfg: add exit destination for network flow logs node attribute (#11698)
Updates tailscale/corp#18625

Signed-off-by: Claire Wang <claire@tailscale.com>
1 month ago
Joe Tsai 7a77a2edf1
logtail: optimize JSON processing (#11671)
Changes made:

* Avoid "encoding/json" for JSON processing, and instead use
"github.com/go-json-experiment/json/jsontext".
Use jsontext.Value.IsValid for validation, which is much faster.
Use jsontext.AppendQuote instead of our own JSON escaping.

* In drainPending, use a different maxLen depending on lowMem.
In lowMem mode, it is better to perform multiple uploads
than it is to construct a large body that OOMs the process.

* In drainPending, if an error is encountered draining,
construct an error message in the logtail JSON format
rather than something that is invalid JSON.

* In appendTextOrJSONLocked, use jsontext.Decoder to check
whether the input is a valid JSON object. This is faster than
the previous approach of unmarshaling into map[string]any and
then re-marshaling that data structure.
This is especially beneficial for network flow logging,
which produces relatively large JSON objects.

* In appendTextOrJSONLocked, enforce maxSize on the input.
If too large, then we may end up in a situation where the logs
can never be uploaded because it exceeds the maximum body size
that the Tailscale logs service accepts.

* Use "tailscale.com/util/truncate" to properly truncate a string
on valid UTF-8 boundaries.

* In general, remove unnecessary spaces in JSON output.
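
A minimal, standalone sketch of the jsontext calls referenced in the first bullet (assuming the go-json-experiment jsontext API at the time of this change):

```
package main

import (
	"fmt"

	"github.com/go-json-experiment/json/jsontext"
)

func main() {
	// Validate a candidate JSON object without unmarshaling it.
	v := jsontext.Value(`{"logtail":{"client_time":"2024-04-01T00:00:00Z"},"text":"hello"}`)
	fmt.Println("valid JSON:", v.IsValid())

	// Quote arbitrary text as a JSON string without encoding/json.
	buf, err := jsontext.AppendQuote(nil, `line with "quotes" and unicode ✓`)
	fmt.Println(string(buf), err)
}
```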

Performance:

    name       old time/op    new time/op    delta
    WriteText     776ns ± 2%     596ns ± 1%   -23.24%  (p=0.000 n=10+10)
    WriteJSON     110µs ± 0%       9µs ± 0%   -91.77%  (p=0.000 n=8+8)

    name       old alloc/op   new alloc/op   delta
    WriteText      448B ± 0%        0B       -100.00%  (p=0.000 n=10+10)
    WriteJSON    37.9kB ± 0%     0.0kB ± 0%   -99.87%  (p=0.000 n=10+10)

    name       old allocs/op  new allocs/op  delta
    WriteText      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
    WriteJSON     1.08k ± 0%     0.00k ± 0%   -99.91%  (p=0.000 n=10+10)

For text payloads, this is 1.30x faster.
For JSON payloads, this is 12.2x faster.

Updates #cleanup
Updates tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 month ago
Aaron Klotz 4d5d669cd5 net/dns: unconditionally write NRPT rules to local settings
We were being too aggressive when deciding whether to write our NRPT rules
to the local registry key or the group policy registry key.

After once again reviewing the document which calls itself a spec
(see issue), it is clear that the presence of the DnsPolicyConfig subkey
is the important part, not the presence of values set in the DNSClient
subkey. Furthermore, a footnote indicates that the presence of
DnsPolicyConfig in the GPO key will always override its counterpart in
the local key. The implication of this is important: we may unconditionally
write our NRPT rules to the local key. We copy our rules to the policy
key only when it contains NRPT rules belonging to somebody other than us.

Fixes https://github.com/tailscale/corp/issues/19071

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
1 month ago
License Updater 9d021579e7 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
1 month ago
Will Norris 11dca08e93 licenses: add gliderlabs/ssh license
This package is included in the tempfork directory, rather than as a go
module dependency, so is not included in the normal package list.

Updates tailscale/corp#5780

Signed-off-by: Will Norris <will@tailscale.com>
1 month ago
Jenny Zhang 2207643312 VERSION.txt: this is v1.65.0
Signed-off-by: Jenny Zhang <jz@tailscale.com>
1 month ago
Jenny Zhang 09524b58f3 VERSION.txt: this is v1.64.0
Signed-off-by: Jenny Zhang <jz@tailscale.com>
1 month ago
James Tucker a2eb1c22b0 wgengine/magicsock: allow disco communication without known endpoints
Just because we don't have known endpoints for a peer does not mean that
the peer should become unreachable. If we know the peer's key, it should
be able to call us, and then we can talk back via whatever path it called us
on. First step - don't drop the packet in this context.

Updates tailscale/corp#19106

Signed-off-by: James Tucker <james@tailscale.com>
1 month ago
Patrick O'Doherty 7f4cda23ac
scripts/installer.sh: add rpm GPG key import (#11686)
Extend the `zypper` install to import the GPG key used to sign
the repository packages.

Updates #11635

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
1 month ago
James Tucker 8fa3026614 tsweb: switch to fastuuid for request ID generation
Request ID generation appears prominently in some services' cumulative
allocation rate, and while this does not eradicate this issue (the API
still makes UUID objects), it does improve the overhead of this API and
reduce the amount of garbage that it produces.

Updates tailscale/corp#18266
Updates tailscale/corp#19054

Signed-off-by: James Tucker <james@tailscale.com>
1 month ago
James Tucker d0f3fa7d7e util/fastuuid: add a more efficient uuid generator
This still generates github.com/google/uuid UUID objects, but does so
using a ChaCha8 CSPRNG from the stdlib rand/v2 package. The public API
is backed by a sync.Pool to provide good performance in highly
concurrent operation.

Under high load the read API produces a lot of extra garbage and
overhead by way of temporaries and syscalls. This implementation reduces
both to minimal levels, and avoids any long held global lock by
utilizing sync.Pool.
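
A minimal sketch of the approach (hypothetical code, not the util/fastuuid API): a sync.Pool of ChaCha8 generators from math/rand/v2 producing github.com/google/uuid values:

```
package main

import (
	crand "crypto/rand"
	"encoding/binary"
	"fmt"
	randv2 "math/rand/v2"
	"sync"

	"github.com/google/uuid"
)

// pool holds ChaCha8 CSPRNGs, each seeded once from crypto/rand.
var pool = sync.Pool{
	New: func() any {
		var seed [32]byte
		if _, err := crand.Read(seed[:]); err != nil {
			panic(err)
		}
		return randv2.NewChaCha8(seed)
	},
}

// newUUID fills 16 random bytes and sets the version/variant bits.
func newUUID() uuid.UUID {
	rng := pool.Get().(*randv2.ChaCha8)
	defer pool.Put(rng)
	var u uuid.UUID
	binary.BigEndian.PutUint64(u[:8], rng.Uint64())
	binary.BigEndian.PutUint64(u[8:], rng.Uint64())
	u[6] = (u[6] & 0x0f) | 0x40 // version 4
	u[8] = (u[8] & 0x3f) | 0x80 // RFC 4122 variant
	return u
}

func main() { fmt.Println(newUUID()) }
```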

Updates tailscale/corp#18266
Updates tailscale/corp#19054

Signed-off-by: James Tucker <james@tailscale.com>
1 month ago
James Tucker db760d0bac cmd/tailscaled: move cleanup to an implicit action during startup
This removes a potentially increased boot delay for certain boot
topologies where they block on ExecStartPre that may have socket
activation dependencies on other system services (such as
systemd-resolved and NetworkManager).

Also rename cleanup to clean up in affected/immediately nearby places
per code review commentary.

Fixes #11599

Signed-off-by: James Tucker <james@tailscale.com>
1 month ago
Nick Khyl 8d83adde07 util/winutil/winenv: add package for current Windows environment details
Package winenv provides information about the current Windows environment.
This includes details such as whether the device is a server or workstation,
and if it is AD domain-joined, MDM-registered, or neither.

Updates tailscale/corp#18342

Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 month ago
Paul Scott da4e92bf01 cmd/tailscale/cli: prefix all --help usages with "tailscale ...", some tidying
Also capitalises the start of all ShortHelp, allows subcommands to be hidden
with a "HIDDEN: " prefix in their ShortHelp, and adds a TS_DUMP_HELP envknob
to look at all --help messages together.

Fixes #11664

Signed-off-by: Paul Scott <paul@tailscale.com>
1 month ago
Percy Wegmann 9da135dd64 cmd/tailscale/cli: moved share.go to drive.go
Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
1 month ago
Percy Wegmann 1e0ebc6c6d cmd/tailscale/cli: rename share command to drive
Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
1 month ago
Joe Tsai b4ba492701
logtail: require Buffer.Write to not retain the provided slice (#11617)
Buffer.Write has the exact same signature of io.Writer.Write.
The latter requires that implementations to never retain
the provided input buffer, which is an expectation that most
users will have when they see a Write signature.

The current behavior of Buffer.Write where it does retain
the input buffer is a risky precedent to set.
Switch the behavior to match io.Writer.Write.

There are only two implementations of Buffer in existence:
* logtail.memBuffer
* filch.Filch

The former can be fixed by cloning the input to Write.
This will cause an extra allocation in every Write,
but we can fix that with pooling on the caller side
in a follow-up PR.
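
A minimal sketch of the memBuffer-style fix (illustrative type; the real logtail buffer does more):

```
package sketch

import "bytes"

type memBuffer struct {
	pending [][]byte
}

// Write satisfies the io.Writer contract: the caller may reuse p as soon as
// Write returns, so keep our own copy instead of retaining the slice.
func (b *memBuffer) Write(p []byte) (int, error) {
	b.pending = append(b.pending, bytes.Clone(p))
	return len(p), nil
}
```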

The latter only passes the input to os.File.Write,
which does respect the io.Writer.Write requirements.

Updates #cleanup
Updates tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 month ago
Irbe Krumina 231e44e742
Revert "cmd/{k8s-nameserver,k8s-operator},k8s-operator: add a kube nameserver, make operator deploy it (#11017)" (#11669)
Temporarily reverting this PR to avoid releasing a
half-finished feature.

This reverts commit 9e2f58f846.

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
1 month ago
Andrea Gottardo 0001237253 docs/policy: update ADMX and ADML files with new Windows 1.62 syspolicies
Updates ENG-2776

Updates the .admx and .adml files to include the new ManagedByOrganizationName, ManagedByCaption and ManagedByURL system policies, added in Tailscale v1.62 for Windows.

Co-authored-by: Andrea Gottardo <andrea@gottardo.me>
Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 month ago
Brad Fitzpatrick b27238b654 derp/derphttp: don't block in LocalAddr method
The derphttp.Client mutex is held during connects (for up to 10
seconds) so this LocalAddr method (blocking on said mutex) could also
block for up to 10 seconds, causing a pileup upstream in
magicsock/wgengine and ultimately a watchdog timeout resulting in a
crash.

Updates #11519

Change-Id: Idd1d94ee00966be1b901f6899d8b9492f18add0f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick e6983baa73
cmd/tailscale/cli: fix macOS crash reading envknob in init (#11667)
And add a test.

Regression from a5e1f7d703

Fixes tailscale/corp#19036

Change-Id: If90984049af0a4820c96e1f77ddf2fce8cb3043f

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Chloé Vulquin 0f3a292ebd
cli/configure: respect $KUBECONFIG (#11604)
cmd/tailscale/cli: respect $KUBECONFIG

* `$KUBECONFIG` is `$PATH`-like: it defines a *list*.
`tailscale config kubeconfig` works like the rest of the
ecosystem: if $KUBECONFIG is set, it writes to the first existent file in the list; if none exist, then to
the final entry in the list (see the sketch after this list).
* if `$KUBECONFIG` is an empty string, the old logic takes over.
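
A minimal sketch of the path-selection rule described above (hypothetical helper; the real CLI does more than just pick a path):

```
package sketch

import (
	"os"
	"path/filepath"
)

// kubeconfigPath picks the kubeconfig file to write, honoring the
// $PATH-like $KUBECONFIG list.
func kubeconfigPath() string {
	env := os.Getenv("KUBECONFIG")
	if env == "" {
		// Old logic: the conventional default location.
		return filepath.Join(os.Getenv("HOME"), ".kube", "config")
	}
	paths := filepath.SplitList(env)
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return p // first entry that already exists
		}
	}
	return paths[len(paths)-1] // none exist: use the final entry
}
```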

Notes:

* The logic for file detection is inlined based on what `kind` does.
Technically it's a race condition, since the file could be removed/added
in between the processing steps, but the fallout shouldn't be too bad.
https://github.com/kubernetes-sigs/kind/blob/v0.23.0-alpha/pkg/cluster/internal/kubeconfig/internal/kubeconfig/paths.go

* The sandboxed (App Store) variant relies on a specific temporary
entitlement to access the ~/.kube/config file.
The entitlement is only granted to specific files, and so is not
applicable to paths supplied by the user at runtime.
While there may be other ways to achieve this access to arbitrary
kubeconfig files, it's out of scope for now.

Updates #11645

Signed-off-by: Chloé Vulquin <code@toast.bunkerlabs.net>
1 month ago
Brad Fitzpatrick c71e8db058 cmd/tailscale/cli: stop spamming os.Stdout/os.Stderr in tests
After:

    bradfitz@book1pro tailscale.com % ./tool/go test -c ./cmd/tailscale/cli
    bradfitz@book1pro tailscale.com % ./cli.test
    bradfitz@book1pro tailscale.com %

Before:

    bradfitz@book1pro tailscale.com % ./tool/go test -c ./cmd/tailscale/cli
    bradfitz@book1pro tailscale.com % ./cli.test

    Warning: funnel=on for foo.test.ts.net:443, but no serve config
             run: `tailscale serve --help` to see how to configure handlers

    Warning: funnel=on for foo.test.ts.net:443, but no serve config
             run: `tailscale serve --help` to see how to configure handlers
    USAGE
      funnel <serve-port> {on|off}
      funnel status [--json]

    Funnel allows you to publish a 'tailscale serve'
    server publicly, open to the entire internet.

    Turning off Funnel only turns off serving to the internet.
    It does not affect serving to your tailnet.

    SUBCOMMANDS
      status  show current serve/funnel status
    error: path must be absolute

    error: invalid TCP source "localhost:5432": missing port in address

    error: invalid TCP source "tcp://somehost:5432"
    must be one of: localhost or 127.0.0.1

    tcp://somehost:5432error: invalid TCP source "tcp://somehost:0"
    must be one of: localhost or 127.0.0.1

    tcp://somehost:0error: invalid TCP source "tcp://somehost:65536"
    must be one of: localhost or 127.0.0.1

    tcp://somehost:65536error: path must be absolute

    error: cannot serve web; already serving TCP

    You don't have permission to enable this feature.

This also moves the color handling up to a generic spot so it's
not just one subcommand doing it itself. See
https://github.com/tailscale/tailscale/issues/11626#issuecomment-2041795129

Fixes #11643
Updates #11626

Change-Id: I3a49e659dcbce491f4a2cb784be20bab53f72303
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Anton Tolchanov 5336362e64 prober: export probe class and metrics from bandwidth prober
- Wrap each prober function into a probe class that allows associating
  metric labels and custom metrics with a given probe;
- Make sure all existing probe classes set a `class` metric label;
- Move bandwidth probe size from being a metric label to a separate
  gauge metric; this will make it possible to use it to calculate
  average used bandwidth using a PromQL query;
- Also export transfer time for the bandwidth prober (more accurate than
  the total probe time, since it excludes connection establishment
  time).

Updates tailscale/corp#17912

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
1 month ago
Anton Tolchanov 21671ca374 prober: remove unused notification code
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
1 month ago
Brad Fitzpatrick b0fbd85592 net/tsdial: partially fix "tailscale nc" (UserDial) on macOS
At least in the case of dialing a Tailscale IP.

Updates #4529

Change-Id: I9fd667d088a14aec4a56e23aabc2b1ffddafa3fe
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick a5e1f7d703 ipn/{ipnlocal,localapi}: add API to toggle use of exit node
This is primarily for GUIs, so they don't need to remember the most
recently used exit node themselves.

This adds some CLI commands, but they're disabled and behind the WIP
envknob, as we need to consider naming (on/off is ambiguous with
running an exit node, etc) as well as automatic exit node selection in
the future. For now the CLI commands are effectively developer debug
things to test the LocalAPI.

Updates tailscale/corp#18724

Change-Id: I9a32b00e3ffbf5b29bfdcad996a4296b5e37be7e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Maisem Ali 3f4c5daa15 wgengine/netstack: remove SubnetRouterWrapper
It was used when we only supported subnet routers on linux
and would nil out the SubnetRoutes slice as no other router
worked with it, but now we support subnet routers on ~all platforms.

The field it was setting to nil is now only used for network logging
and nowhere else, so keep the field but drop the SubnetRouterWrapper
as it's not useful.

Updates #cleanup

Change-Id: Id03f9b6ec33e47ad643e7b66e07911945f25db79
Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 month ago
alexelisenko fe22032fb3
net/dns/{publicdns,resolver}: add start of Control D support
Updates #7946

[@bradfitz fixed up version of #8417]

Change-Id: I1dbf6fa8d525b25c0d7ad5c559a7f937c3cd142a
Signed-off-by: alexelisenko <39712468+alexelisenko@users.noreply.github.com>
Signed-off-by: Alex Paguis <alex@windscribe.com>
1 month ago
Brad Fitzpatrick aa084a29c6 ipn/ipnlocal: name the unlockOnce type, plumb more, add Unlock method
This names the func() that Once-unlocked LocalBackend.mu. It does so
both for docs and because it can then have a method: Unlock, for the
few points that need to explicitly unlock early (the cause of all this
mess). This makes those ugly points easy to find, and also can then
make them stricter, panicking if the mutex is already unlocked. So a
normal call to the func just once-releases the mutex, returning false
if it's already done, but the Unlock method is the strict one.
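
A minimal sketch of the described shape (illustrative, not the ipnlocal code): a named func type that once-releases a mutex, plus a strict Unlock method:

```
package sketch

import "sync"

type unlockOnce func() bool

// Unlock is the strict variant: it panics if the mutex was already released.
func (u unlockOnce) Unlock() {
	if !u() {
		panic("mutex already unlocked")
	}
}

// lockAndGetUnlock locks mu and returns a func that releases it at most once,
// reporting false if it had already been released.
func lockAndGetUnlock(mu *sync.Mutex) unlockOnce {
	mu.Lock()
	var done bool
	return func() bool {
		if done {
			return false
		}
		done = true
		mu.Unlock()
		return true
	}
}
```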

Then this uses it more, so most the b.mu.Unlock calls remaining are
simple cases and usually defers.

Updates #11649

Change-Id: Ia070db66c54a55e59d2f76fdc26316abf0dd4627
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 5e7c0b025c ipn/ipnlocal: add some "lockedOnEntry" helpers + guardrails, fix bug
A number of methods in LocalBackend (with suffixed "LockedOnEntry")
require b.mu be held but unlock it on the way out. That's asymmetric
and atypical and error prone.

This adds a helper method to LocalBackend that locks the mutex and
returns a sync.OnceFunc that unlocks the mutex. Then we pass around
that unlocker func down the chain to make it explicit (and somewhat
type check the passing of ownership) but also let the caller defer
unlock it, in the case of errors/panics that happen before the callee
gets around to calling the unlock.

This revealed a latent bug in LocalBackend.DeleteProfile which double
unlocked the mutex.

Updates #11649

Change-Id: I002f77567973bd77b8906bfa4ec9a2049b89836a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Flakes Updater efb710d0e5 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
1 month ago
Brad Fitzpatrick 38377c37b5 ipn/localapi: sort localapi handler map keys
Updates #cleanup

Change-Id: I750ed8d033954f1f8786fb35dd16895bb1c5af8e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Maisem Ali 21b32b467e tsweb: handle panics in retHandler
We would have incomplete stats and missing logs in cases
of panics.

Updates tailscale/corp#18687

Signed-off-by: Maisem Ali <maisem@tailscale.com>
1 month ago
Brad Fitzpatrick ac2522092d cmd/tailscale/cli: make exit-node list not random
The output was changing randomly per run, due to range over a map.

Then some misc style tweaks I noticed while debugging.

Fixes #11629

Change-Id: I67aef0e68566994e5744d4828002f6eb70810ee1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
James Tucker 6e334e64a1 net/netcheck,wgengine/magicsock: align DERP frame receive time heuristics
The netcheck package and the magicsock package coordinate via the
health package, but both sides have time-based heuristics through
indirect dependencies. These were misaligned, so the implemented
heuristic, aimed at reducing DERP moves while there is active traffic,
was non-operational about 3/5ths of the time.

It is problematic to set up a good test for this integration presently,
so instead I added comment breadcrumbs along with the initial fix.

Updates #8603

Signed-off-by: James Tucker <james@tailscale.com>
1 month ago
Irbe Krumina 1fbaf26106
util/linuxfw: fix chain comparison (#11639)
Don't compare pointer fields by pointer value, but by the actual value.
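
A minimal sketch of value-wise pointer comparison (a generic helper used purely for illustration; the actual fix compares nftables chain fields):

```
package sketch

// ptrValEq reports whether two pointers are equal by the values they point
// to, treating two nils as equal and nil vs non-nil as unequal.
func ptrValEq[T comparable](a, b *T) bool {
	if a == nil || b == nil {
		return a == b
	}
	return *a == *b
}
```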

Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
1 month ago
Charlotte Brandhorst-Satzkorn 8c75da27fc
drive: move normalizeShareName into pkg drive and make func public (#11638)
This change makes the normalizeShareName function public, so it can be
used for validation in control.

Updates tailscale/corp#16827

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
1 month ago
Will Morrison 306bacc669 cmd/tailscale/cli: Add CLI command to update certs on Synology devices.
Fixes #4674

Signed-off-by: Will Morrison <william.barr.morrison@gmail.com>
1 month ago
Brad Fitzpatrick 9699bb0a20 metrics: fix outdated docs on MultiLabelMap
And make NewMultiLabelMap panic earlier (at construction time)
if the comparable struct type T violates the documented rules,
rather than panicking at Add time.

Updates #cleanup

Change-Id: Ib1a03babdd501b8d699c4f18b1097a56c916c6d5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Joonas Kuorilehto fe0cfec4ad wgengine/router: enable ip forwarding on gokrazy
Only on Gokrazy, set sysctls to enable IP forwarding so that subnet routing
and advertised exit nodes work.
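
A minimal sketch of what enabling forwarding amounts to (illustrative; the router code applies this only on gokrazy):

```
package sketch

import "os"

// enableIPForwarding writes "1" to the forwarding sysctls for IPv4 and IPv6.
func enableIPForwarding() error {
	for _, path := range []string{
		"/proc/sys/net/ipv4/ip_forward",
		"/proc/sys/net/ipv6/conf/all/forwarding",
	} {
		if err := os.WriteFile(path, []byte("1"), 0644); err != nil {
			return err
		}
	}
	return nil
}
```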

Fixes #11405

Signed-off-by: Joonas Kuorilehto <joneskoo@derbian.fi>
1 month ago
Joe Tsai 4bbac72868
util/truncate: support []byte as well (#11614)
There are no mutations to the input,
so we can support both ~string and ~[]byte just fine.

Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 month ago
Charlotte Brandhorst-Satzkorn 98cf71cd73
tailscale: switch tailfs to drive syntax for api and logs (#11625)
This change switches the API to /drive, rather than the previous /tailfs,
and updates the log lines to reflect the new value. It also
cleans up some existing tailfs references.

Updates tailscale/corp#16827

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
1 month ago
Percy Wegmann 853e3e29a0 wgengine/router: provide explicit hook to signal Android when VPN needs to be reconfigured
This allows clients to avoid establishing their VPN multiple times when
both routes and DNS are changing in rapid succession.

Updates tailscale/corp#18928

Signed-off-by: Percy Wegmann <percy@tailscale.com>
1 month ago
Joe Tsai 1a38d2a3b4
util/zstdframe: support specifying a MaxWindowSize (#11595)
Specifying a smaller window size during compression
provides a knob to tweak the tradeoff between memory usage
and the compression ratio.

Updates tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 month ago
Andrew Dunham 7d7d159824 prober: support creating multiple probes in ForEachAddr
So that we can e.g. check TLS on multiple ports for a given IP.

Updates tailscale/corp#16367

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I81d840a4c88138de1cbb2032b917741c009470e6
1 month ago
Andrew Dunham ac574d875c prober: add helper function to check all IPs for a DNS hostname
This allows us to check all IP addresses (and address families) for a
given DNS hostname while dynamically discovering new IPs and removing
old ones as they're no longer valid.

Also add a testable example that demonstrates how to use it.

Alternative to #11610
Updates tailscale/corp#16367

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I6d6f39bafc30e6dfcf6708185d09faee2a374599
1 month ago
Brad Fitzpatrick 8d7894c68e clientupdate, net/dns: fix some "tailsacle" typos
Updates #cleanup

Change-Id: I982175e74b0c8c5b3e01a573e5785e6596b7ac39
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick 92d3f64e95 go.toolchain.rev: bump to Go 1.22.2
Update tailscale/corp#18893

Change-Id: I4c04f5153ad43429d7f510c9ac2194c3b2fbc6c1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Charlotte Brandhorst-Satzkorn 93618a3518
tailscale: update tailfs functions and vars to use drive naming (#11597)
This change updates all tailfs functions and the majority of the tailfs
variables to use the new drive naming.

Updates tailscale/corp#16827

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
1 month ago
Brad Fitzpatrick 2409661a0d control/controlclient: delete old naclbox code, require ts2021 Noise
Updates #11585
Updates tailscale/corp#18882

Change-Id: I90e2e4a211c58d429e2b128604614dde18986442
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Brad Fitzpatrick b9611461e5 ipn/ipnlocal: q-encode (RFC 2047) Tailscale serve header values
Updates #11603

RELNOTE=Tailscale serve headers are now RFC 2047 Q-encoded
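
A minimal sketch of RFC 2047 Q-encoding a header value with the standard mime package (an illustration of the technique; not necessarily the exact call the serve code uses):

```
package main

import (
	"fmt"
	"mime"
)

func main() {
	v := "café mode" // header value containing non-ASCII bytes
	// Prints an encoded-word along the lines of =?utf-8?q?caf=C3=A9_mode?=
	fmt.Println(mime.QEncoding.Encode("utf-8", v))
}
```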

Change-Id: I1314b65ecf5d39a5a601676346ec2c334fdef042
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Claire Wang 262fa8a01e
ipn/ipnlocal: populate peers' capabilities (#11365)
Populates capabilties field of peers in ipn status.
Updates tailscale/corp#17516

Signed-off-by: Claire Wang <claire@tailscale.com>
1 month ago
James Tucker 9eaa56df93 tsweb: update doc on BucketedStatsOptions.Finish to match behavior
I originally came to update this to match the documented behavior, but
the code is deliberately avoiding this behavior currently, making it
hard to decide how to update this. For now just align the documentation
to the behavior.

Updates #cleanup

Signed-off-by: James Tucker <james@tailscale.com>
1 month ago
Charlotte Brandhorst-Satzkorn 14683371ee
tailscale: update tailfs file and package names (#11590)
This change updates the tailfs file and package names to their new
naming convention.

Updates tailscale/corp#16827

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
1 month ago
Brad Fitzpatrick 1c259100b0 cmd/{derper,derpprobe}: add --version flag
Fixes #11582

Change-Id: If99fc1ab6b89d624fbb07bd104dd882d2c7b50b4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Patrick O'Doherty 1535d0feca
safeweb: move http.Serve for HTTP redirects into lib (#11592)
Refactor the interaction between caller/library when establishing the
HTTP to HTTPS redirects by moving the call to http.Serve into safeweb.
This makes linting for other uses of http.Serve easier without having to
account for false positives created by the old interface.

Updates https://github.com/tailscale/corp/issues/8027

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
1 month ago
James Tucker f384742375 net/packet: allow more ICMP errors
We now allow some more ICMP errors to flow, specifically:

- ICMP parameter problem in both IPv4 and IPv6 (corrupt headers)
- ICMP Packet Too Big (for IPv6 PMTU)

Updates #311
Updates #8102
Updates #11002

Signed-off-by: James Tucker <james@tailscale.com>
1 month ago
Irbe Krumina 92ca770b8d
util/linuxfw: fix MSS clamping in nftables mode (#11588)
MSS clamping for nftables was mostly not run due to an earlier rule in the FORWARD chain issuing an accept verdict.
This commit places the clamping rule into a chain of its own to ensure that it gets run.

Updates tailscale/tailscale#11002

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
1 month ago
Kyle Carberry 27038ee3c2 hostinfo: cache device model to speed up init
This was causing a relatively consistent ~10ms of delay on Linux.

Signed-off-by: Kyle Carberry <kyle@carberry.com>
1 month ago
Brad Fitzpatrick ec87e219ae logtail: delete unused code from old way to configure zstd
Updates #cleanup

Change-Id: I666ecf08ea67e461adf2a3f4daa9d1753b2dc1e4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Joe Tsai e2586bc674
logtail: always zstd compress with FastestCompression and LowMemory (#11583)
This is based on empirical testing using actual logs data.

FastestCompression only incurs a marginal <1% compression ratio hit
for a 2.25x reduction in memory use for small payloads
(which are common if log uploads happen at a decently high frequency).
The memory savings for large payloads is much lower
(less than 1.1x reduction).

LowMemory only incurs a marginal <5% hit on performance
for a 1.6-2.0x reduction in memory use for small or large payloads.

The memory gains for both settings justifies the loss of benefits,
which are arguably minimal.

tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 month ago
James Tucker 7558a1d594 ipn/ipnlocal: disable sockstats on (unstable) mobile by default
We're tracking down a new instance of memory usage, and excessive memory usage
from sockstats is definitely not going to help with debugging, so disable it by
default on mobile.

Updates tailscale/corp#18514

Signed-off-by: James Tucker <james@tailscale.com>
2 months ago
Asutorufa e20ce7bf0c net/dns: close ctx when closing dns directManager
Signed-off-by: Asutorufa <16442314+Asutorufa@users.noreply.github.com>
2 months ago
Will Norris 1d2af801fa .github/workflows: remove go-licenses action
This is now handled by an action running in corp.

Updates tailscale/corp#18803

Signed-off-by: Will Norris <will@tailscale.com>
2 months ago
License Updater e80b99cdd1 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2 months ago
Andrew Lytvynov 5aa4cfad06
safeweb: detect mux handler conflicts (#11562)
When both muxes match, and one of them is a wildcard "/" pattern (which
is common in browser muxes), choose the more specific pattern.
If both are non-wildcard matches, there is a pattern overlap, so return
an error.
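
A minimal sketch of the resolution rule (hypothetical helper; safeweb's actual handler wiring differs):

```
package sketch

import (
	"fmt"
	"net/http"
)

// pickHandler chooses between a browser mux and an API mux for one request,
// preferring the more specific pattern when one side matched only via "/".
func pickHandler(browser, api *http.ServeMux, r *http.Request) (http.Handler, error) {
	bh, bPat := browser.Handler(r)
	ah, aPat := api.Handler(r)
	switch {
	case bPat == "/" && aPat != "" && aPat != "/":
		return ah, nil
	case aPat == "/" && bPat != "" && bPat != "/":
		return bh, nil
	case bPat != "" && aPat != "":
		return nil, fmt.Errorf("pattern overlap: %q vs %q", bPat, aPat)
	case bPat != "":
		return bh, nil
	default:
		return ah, nil
	}
}
```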

Updates https://github.com/tailscale/corp/issues/8027

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 months ago
Brad Fitzpatrick e7599c1f7e logtail: prevent js/wasm clients from picking TLS client cert
Corp details:
https://github.com/tailscale/corp/issues/18177#issuecomment-2026598715
https://github.com/tailscale/corp/pull/18775#issuecomment-2027505036

Updates tailscale/corp#18177

Change-Id: I7c03a4884540b8519e0996088d085af77991f477
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Irbe Krumina 5fb721d4ad
util/linuxfw,wgengine/router: skip IPv6 firewall configuration in partial iptables mode (#11546)
We have hosts that support IPv6, but not IPv6 firewall configuration
in iptables mode.
We also have hosts that have some support for IPv6 firewall
configuration in iptables mode, but do not have the iptables filter table.
We should:
- configure ip rules for all hosts that support IPv6
- only configure firewall rules in iptables mode if the host
has the iptables filter table.

Updates tailscale/tailscale#11540

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
Patrick O'Doherty af61179c2f
safeweb: add opt-in inline style CSP toggle (#11551)
Allow the use of inline styles with safeweb via an opt-in configuration
item. This will append `style-src "self" "unsafe-inline"` to the default
CSP. The `style-src` directive will be used in lieu of the fallback
`default-src "self"` directive.

Updates tailscale/corp#8027

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2 months ago
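A small illustrative sketch of what the resulting header construction can look like (directive set simplified; not the safeweb code):

```go
package main

import (
	"fmt"
	"strings"
)

// buildCSP returns a Content-Security-Policy value. With inline styles opted
// in, a style-src directive is appended; browsers then use it for stylesheets
// instead of falling back to default-src.
func buildCSP(allowInlineStyles bool) string {
	directives := []string{
		"default-src 'self'",
		"frame-ancestors 'none'",
	}
	if allowInlineStyles {
		directives = append(directives, "style-src 'self' 'unsafe-inline'")
	}
	return strings.Join(directives, "; ")
}

func main() {
	fmt.Println(buildCSP(false))
	fmt.Println(buildCSP(true))
}
```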
Brad Fitzpatrick b0941b79d6 tsweb: make BucketedStats not track 400s, 404s, etc
Updates tailscale/corp#18687

Change-Id: I142ccb1301ec4201c70350799ff03222bce96668
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Brad Fitzpatrick 354cac74a9 tsweb/varz: add charset=utf-8 to varz handler
Some of our labels contain UTF-8 and get mojibaked in the browser
right now.

Updates tailscale/corp#18687

Change-Id: I6069cffd6cc8813df415f06bb308bc2fc3ab65c4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
James Tucker 9401b09028 control/controlclient: move client watchdog to cover initial request
The initial control client request can get stuck in the event that a
connection is established but then lost part way through, without any
ICMP or RST. Ensure that the control client will be restarted by timing
out that initial request as well.

Fixes #11542

Signed-off-by: James Tucker <james@tailscale.com>
2 months ago
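A rough sketch of the general technique: bound the whole initial request with a deadline so a connection that silently dies mid-request (no RST, no ICMP) still causes the client to give up and restart. The URL and timeout are illustrative; this is not the controlclient code.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// checkControl issues one request under a watchdog timeout. If the connection
// stalls part way through, the context deadline fires and the call returns.
func checkControl(ctx context.Context, url string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // includes context.DeadlineExceeded if the connection stalled
	}
	defer resp.Body.Close()
	fmt.Println("control responded:", resp.Status)
	return nil
}

func main() {
	err := checkControl(context.Background(), "https://controlplane.tailscale.com/", 10*time.Second)
	fmt.Println("result:", err)
}
```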
Irbe Krumina 9b5176c4d9
cmd/k8s-operator: fix failing tests (#11541)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
Irbe Krumina 9e2f58f846
cmd/{k8s-nameserver,k8s-operator},k8s-operator: add a kube nameserver, make operator deploy it (#11017)
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.

Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.

The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.

Updates tailscale/tailscale#10499

Signed-off-by: Irbe Krumina <irbe@tailscale.com>

* cmd/k8s-operator/deploy/crds,k8s-operator: add DNSConfig CustomResource Definition

DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.

Signed-off-by: Irbe Krumina <irbe@tailscale.com>

* cmd/k8s-operator,k8s-operator: optionally reconcile nameserver resources

Adds a new reconciler that reconciles DNSConfig resources.
If a DNSConfig is deployed to cluster,
the reconciler creates kube nameserver resources.
This reconciler is only responsible for creating
nameserver resources and not for populating nameserver's records.

Signed-off-by: Irbe Krumina <irbe@tailscale.com>

* cmd/{k8s-operator,k8s-nameserver}: generate DNSConfig CRD for charts, append to static manifests

Signed-off-by: Irbe Krumina <irbe@tailscale.com>

---------

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
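A stripped-down sketch of such a handler using the miekg/dns package; the record map is hard-coded here for illustration (the real nameserver reads it from the mounted ConfigMap), and the listen address is arbitrary:

```go
package main

import (
	"log"
	"net"
	"sync"

	"github.com/miekg/dns"
)

var (
	mu      sync.RWMutex
	records = map[string]net.IP{ // illustrative; the operator populates this from a ConfigMap
		"svc.tailnet-foo.ts.net.": net.ParseIP("10.0.0.10"),
	}
)

func handleTSNet(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	m.SetReply(r)
	m.Authoritative = true
	if len(r.Question) != 1 || r.Question[0].Qtype != dns.TypeA {
		// Only A queries are answered for now; everything else gets NXDOMAIN.
		m.SetRcode(r, dns.RcodeNameError)
		w.WriteMsg(m)
		return
	}
	q := r.Question[0]
	mu.RLock()
	ip, ok := records[q.Name]
	mu.RUnlock()
	if !ok {
		m.SetRcode(r, dns.RcodeNameError)
	} else {
		m.Answer = append(m.Answer, &dns.A{
			Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
			A:   ip,
		})
	}
	w.WriteMsg(m)
}

func main() {
	// Only ts.net names are handled; other zones are not matched by this handler.
	dns.HandleFunc("ts.net.", handleTSNet)
	log.Fatal((&dns.Server{Addr: ":1053", Net: "udp"}).ListenAndServe())
}
```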
Patrick O'Doherty b60c4664c7
safeweb: return http.Handler from safeweb.RedirectHTTP (#11538)
Updates #cleanup

Change the return type of the safeweb.RedirectHTTP method to a handler
that can be passed directly to http.Serve without any http.HandlerFunc
wrapping necessary.

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2 months ago
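A hedged sketch of the shape of such a handler; the helper name is illustrative, not the safeweb API:

```go
package main

import (
	"log"
	"net"
	"net/http"
)

// redirectHTTP returns a handler that sends every plain-HTTP request to the
// HTTPS equivalent on the given host.
func redirectHTTP(httpsHost string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		u := *r.URL
		u.Scheme = "https"
		u.Host = httpsHost
		http.Redirect(w, r, u.String(), http.StatusMovedPermanently)
	})
}

func main() {
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}
	// Because redirectHTTP returns an http.Handler, no http.HandlerFunc
	// wrapping is needed at the call site.
	log.Fatal(http.Serve(ln, redirectHTTP("example.com")))
}
```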
Brad Fitzpatrick 3e6306a782 derp/derphttp: make CONNECT Host match request-target's authority-form
This CONNECT client doesn't match what Go's net/http.Transport does
(making the two values match).  This makes it match.

This is all pretty unspecified but most clients & doc examples show
these matching. And some proxy implementations (such as Zscaler) care.

Updates tailscale/corp#18716

Change-Id: I135c5facbbcec9276faa772facbde1bb0feb2d26
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Patrick O'Doherty 8f27520633
safeweb: init (#11467)
Updates https://github.com/tailscale/corp/issues/8027

Safeweb is a wrapper around http.Server & tsnet that encodes some
application security defaults.

Safeweb asks developers to split their HTTP routes into two
http.ServeMuxes for serving browser and API-facing endpoints
respectively. It then wraps these HTTP routes with the
context-appropriate security controls.

safeweb.Server#Serve will serve the HTTP muxes over the provided
listener. Callers are responsible for creating and tearing down their
application's listeners. Applications being served over HTTPS that wish
to implement HTTP redirects can use the Server#HTTPRedirect handler to
do so.

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2 months ago
Andrea Gottardo 008676f76e
cmd/serve: update warning for sandboxed macOS builds (#11530) 2 months ago
Percy Wegmann 66e4d843c1 ipn/localapi: add support for multipart POST to file-put
This allows sending multiple files via Taildrop in one request.
Progress is tracked via ipn.Notify.

Updates tailscale/corp#18202

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 months ago
Percy Wegmann bed818a978 ipn/localapi: add support for multipart POST to file-put
This allows sending multiple files via Taildrop in one request.
Progress is tracked via ipn.Notify.

Updates tailscale/corp#18202

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 months ago
Maisem Ali 0d8cd1645a go.mod: bump github.com/gaissmai/bart
To pick up https://github.com/gaissmai/bart/pull/17.

Updates #deps

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2 months ago
Percy Wegmann eb42a16da9 ipn/ipnlocal: report Taildrive access message on failed responses
For example, if we get a 404 when downloading a file, we'll report access.

Also, to reduce verbosity of logs, this elides 0-length files.

Updates tailscale/corp#17818

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 months ago
Andrew Lytvynov 5d41259a63
cmd/tailscale/cli: remove Beta tag from tailscale update (#11529)
Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 months ago
Charlotte Brandhorst-Satzkorn acb611f034
ipn/localipn: introduce logs for tailfs (#11496)
This change introduces some basic logging into the access and share
pathways for tailfs.

Updates tailscale/corp#17818

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2 months ago
Irbe Krumina 4cbef20569
cmd/k8s-operator: redact auth key from debug logs (#11523)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
Brad Fitzpatrick 55baf9474f metrics, tsweb/varz: add multi-label map metrics
Updates tailscale/corp#18640

Change-Id: Ia9ae25956038e9d3266ea165537ac6f02485b74c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Flakes Updater 90a4d6ce69 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2 months ago
Brad Fitzpatrick 6d90966c1f logtail: move a scratch buffer to Logger
Rather than pass around a scratch buffer, put it on the Logger.

This is a baby step towards removing the background uploading
goroutine and starting it as needed.

Updates tailscale/corp#18514 (insofar as it led me to look at this code)

Change-Id: I6fd94581c28bde40fdb9fca788eb9590bcedae1b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Irbe Krumina 06e22a96b1
.github/workflows: fix path filter for 'Kubernetes manifests' test job (#11520)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
Chris Milson-Tokunaga b6dfd7443a
Change type of installCRDs (#11478)
Including the double quotes (`"`) around the value made it appear as though the helm chart expects a string value for `installCRDs`.

Signed-off-by: Chris Milson-Tokunaga <chris.w.milson@gmail.com>
2 months ago
Percy Wegmann 8b8b315258 net/tstun: use gaissmai/bart instead of tempfork/device
This implementation uses less memory than tempfork/device,
which helps avoid OOM conditions in the iOS VPN extension when
switching to a Tailnet with ExitNode routing enabled.

Updates tailscale/corp#18514

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 months ago
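A hedged sketch of the data structure swap, assuming the Insert/Lookup API of github.com/gaissmai/bart at the time; this is not the tstun code itself:

```go
package main

import (
	"fmt"
	"net/netip"

	"github.com/gaissmai/bart"
)

func main() {
	// A tiny routing table keyed by prefix; bart stores this more compactly
	// than the tempfork/device trie it replaced.
	var t bart.Table[string]
	t.Insert(netip.MustParsePrefix("100.64.0.0/10"), "cgnat")
	t.Insert(netip.MustParsePrefix("0.0.0.0/0"), "exit-node")

	if v, ok := t.Lookup(netip.MustParseAddr("100.100.100.100")); ok {
		fmt.Println("longest-prefix match:", v) // "cgnat"
	}
}
```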
Andrew Lytvynov 1e7050e73a
go.mod: bump github.com/docker/docker (#11515)
There's a vulnerability https://pkg.go.dev/vuln/GO-2024-2659 that
govulncheck flags, even though it's only reachable from tests and
cmd/sync-containers and cannot be exploited there.

Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 months ago
Brad Fitzpatrick a36cfb4d3d tailcfg, ipn/ipnlocal, wgengine/magicsock: add only-tcp-443 node attr
Updates tailscale/corp#17879

Change-Id: I0dc305d147b76c409cf729b599a94fa723aef0e0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Brad Fitzpatrick 7b34154df2 all: deprecate Node.Capabilities (more), remove PeerChange.Capabilities [capver 89]
First we had Capabilities []string. Then
https://tailscale.com/blog/acl-grants (#4217) brought CapMap, a
superset of Capabilities. Except we never really finished the
transition inside the codebase to go all-in on CapMap. This does so.

Notably, this converts Capabilities on the wire early to CapMap
internally so the code can only deal in CapMap, even against an old
control server.

In the process, this removes PeerChange.Capabilities support, which no
known control plane sent anyway. They can and should use
PeerChange.CapMap instead.

Updates #11508
Updates #4217

Change-Id: I872074e226b873f9a578d9603897b831d50b25d9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Brad Fitzpatrick 4992aca6ec tsweb/varz: flesh out munging of expvar keys into valid Prometheus metrics
From a problem we hit with how badger registers expvars; it broke
trunkd's exported metrics.

Updates tailscale/corp#1297

Change-Id: I42e1552e25f734c6f521b6e993d57a82849464b2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Brad Fitzpatrick b104688e04 ipn/ipnlocal, types/netmap: replace hasCapability with set lookup on NetworkMap
When node attributes were super rare, the O(n) slice scans looking for
node attributes was more acceptable. But now more code and more users
are using increasingly more node attributes. Time to make it a map.

Noticed while working on tailscale/corp#17879

Updates #cleanup

Change-Id: Ic17c80341f418421002fbceb47490729048756d2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
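A minimal sketch of the slice-to-set conversion this describes, with types simplified to plain strings:

```go
package main

import "fmt"

// buildCapSet converts a capability slice into a set, turning repeated O(n)
// slice scans into O(1) map lookups.
func buildCapSet(caps []string) map[string]bool {
	set := make(map[string]bool, len(caps))
	for _, c := range caps {
		set[c] = true
	}
	return set
}

func main() {
	allCaps := buildCapSet([]string{"https://tailscale.com/cap/is-admin", "funnel"})
	fmt.Println(allCaps["funnel"])     // true, constant-time lookup
	fmt.Println(allCaps["ssh-access"]) // false
}
```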
Percy Wegmann 8c88853db6 ipn/ipnlocal: add c2n /debug/pprof/allocs endpoint
This behaves the same as typical debug/pprof/allocs.

Updates tailscale/corp#18514

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 months ago
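A small sketch of what serving an allocs profile over HTTP looks like with runtime/pprof; the route and listen address here are illustrative, not the c2n plumbing:

```go
package main

import (
	"log"
	"net/http"
	"runtime/pprof"
)

// servePprofAllocs writes the allocs profile in the same binary format that
// net/http/pprof would produce for /debug/pprof/allocs.
func servePprofAllocs(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/octet-stream")
	if err := pprof.Lookup("allocs").WriteTo(w, 0); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/debug/pprof/allocs", servePprofAllocs)
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```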
Brad Fitzpatrick f45594d2c9 control/controlclient: free memory on iOS before full netmap work
Updates tailscale/corp#18514

Change-Id: I8d0330334b030ed8692b25549a0ee887ac6d7188
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
James Tucker e0f97738ee localapi: reduce garbage production in bus watcher
Updates #optimization

Signed-off-by: James Tucker <james@tailscale.com>
2 months ago
James Tucker 3f7313dbdb util/linuxfw,wgengine/router: enable IPv6 configuration when netfilter is disabled
Updates #11434

Signed-off-by: James Tucker <james@tailscale.com>
2 months ago
Brad Fitzpatrick 8444937c89 control/controlclient: fix panic regression from earlier load balancer hint header
In the recent 20e9f3369 we made HealthChangeRequest machine requests
include a NodeKey, as it was the oddball machine request that didn't
include one. Unfortunately, that code was sometimes being called (at
least in some of our integration tests) without a node key due to its
registration with health.RegisterWatcher(direct.ReportHealthChange).

Fortunately tests in corp caught this before we cut a release. It's
possible this only affects this particular integration test's
environment, but still worth fixing.

Updates tailscale/corp#1297

Change-Id: I84046779955105763dc1be5121c69fec3c138672
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Joe Tsai 85febda86d
all: use zstdframe where sensible (#11491)
Use the zstdframe package where sensible instead of plumbing
around our own zstd.Encoder just for stateless operations.

This causes logtail to have a dependency on zstd,
but that's arguably okay since zstd support is implicit
to the protocol between a client and the logging service.
Also, virtually every caller to logger.NewLogger was
manually setting up a zstd.Encoder anyway,
meaning that zstd was functionally always a dependency.

Updates #cleanup
Updates tailscale/corp#18514

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2 months ago
Joe Tsai d4bfe34ba7
util/zstdframe: add package for stateless zstd compression (#11481)
The Go zstd package is not friendly for stateless zstd compression.
Passing around multiple zstd.Encoder just for stateless compression
is a waste of memory since the memory is never freed and seldom
used if no compression operations are happening.

For performance, we pool the relevant Encoder/Decoder
with the specific options set.

Functionally, this package is a wrapper over the Go zstd package
with a more ergonomic API for stateless operations.

This package can be used to cleanup various pre-existing zstd.Encoder
pools or one-off handlers spread throughout our codebases.

Performance:

	BenchmarkEncode/Best               1690        610926 ns/op      25.78 MB/s           1 B/op          0 allocs/op
	    zstd_test.go:137: memory: 50.336 MiB
	    zstd_test.go:138: ratio:  3.269x
	BenchmarkEncode/Better            10000        100939 ns/op     156.04 MB/s           0 B/op          0 allocs/op
	    zstd_test.go:137: memory: 20.399 MiB
	    zstd_test.go:138: ratio:  3.131x
	BenchmarkEncode/Default            15775         74976 ns/op     210.08 MB/s         105 B/op          0 allocs/op
	    zstd_test.go:137: memory: 1.586 MiB
	    zstd_test.go:138: ratio:  3.064x
	BenchmarkEncode/Fastest            23222         53977 ns/op     291.81 MB/s          26 B/op          0 allocs/op
	    zstd_test.go:137: memory: 599.458 KiB
	    zstd_test.go:138: ratio:  2.898x
	BenchmarkEncode/FastestLowMemory                   23361         50789 ns/op     310.13 MB/s          15 B/op          0 allocs/op
	    zstd_test.go:137: memory: 334.458 KiB
	    zstd_test.go:138: ratio:  2.898x
	BenchmarkEncode/FastestNoChecksum                  23086         50253 ns/op     313.44 MB/s          26 B/op          0 allocs/op
	    zstd_test.go:137: memory: 599.458 KiB
	    zstd_test.go:138: ratio:  2.900x

	BenchmarkDecode/Checksum                           70794         17082 ns/op     300.96 MB/s           4 B/op          0 allocs/op
	    zstd_test.go:163: memory: 316.438 KiB
	BenchmarkDecode/NoChecksum                         74935         15990 ns/op     321.51 MB/s           4 B/op          0 allocs/op
	    zstd_test.go:163: memory: 316.438 KiB
	BenchmarkDecode/LowMemory                          71043         16739 ns/op     307.13 MB/s           0 B/op          0 allocs/op
	    zstd_test.go:163: memory: 79.347 KiB

We can see that the options are taking effect where compression ratio improves
with higher levels and compression speed diminishes.
We can also see that LowMemory takes effect where the pooled coder object
references less memory than other cases.
We can see that the pooling is taking effect as there are 0 amortized allocations.

Additional performance:

	BenchmarkEncodeParallel/zstd-24                     1857        619264 ns/op        1796 B/op         49 allocs/op
	BenchmarkEncodeParallel/zstdframe-24                1954        532023 ns/op        4293 B/op         49 allocs/op
	BenchmarkDecodeParallel/zstd-24                     5288        197281 ns/op        2516 B/op         49 allocs/op
	BenchmarkDecodeParallel/zstdframe-24                6441        196254 ns/op        2513 B/op         49 allocs/op

In concurrent usage, handling the pooling in this package
has a marginal benefit over the zstd package,
which relies on a Go channel as the pooling mechanism.
In particular, coders can be freed by the GC when not in use.
Coders can be shared throughout the program if they use this package
instead of multiple independent pools doing the same thing.
The allocations are unrelated to pooling as they're caused by the spawning of goroutines.

Updates #cleanup
Updates tailscale/corp#18514
Updates tailscale/corp#17653
Updates tailscale/corp#18005

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2 months ago
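A hedged sketch of the pooling idea behind such a package, using klauspost/compress directly; the function name appendEncode is illustrative and not the zstdframe API:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/klauspost/compress/zstd"
)

// encPool holds encoders configured once with the desired options; idle
// encoders can be reclaimed by the GC, unlike a fixed set of long-lived
// zstd.Encoder values.
var encPool = sync.Pool{
	New: func() any {
		e, err := zstd.NewWriter(nil, zstd.WithEncoderLevel(zstd.SpeedFastest))
		if err != nil {
			panic(err)
		}
		return e
	},
}

// appendEncode borrows an encoder, does a one-shot stateless EncodeAll,
// and returns the encoder to the pool.
func appendEncode(dst, src []byte) []byte {
	e := encPool.Get().(*zstd.Encoder)
	defer encPool.Put(e)
	return e.EncodeAll(src, dst)
}

func main() {
	out := appendEncode(nil, []byte("hello hello hello hello"))
	fmt.Println("compressed to", len(out), "bytes")
}
```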
Brad Fitzpatrick 6a860cfb35 ipn/ipnlocal: add c2n pprof option to force a GC
Like net/http/pprof has.

Updates tailscale/corp#18514

Change-Id: I264adb6dcf5732d19707783b29b7273b4ca69cf4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Brad Fitzpatrick 5d1c72f76b wgengine/magicsock: don't use endpoint debug ringbuffer on mobile.
Save some memory.

Updates tailscale/corp#18514

Change-Id: Ibcaf3c6d8e5cc275c81f04141d0f176e2249509b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Andrew Dunham 512fc0b502 util/reload: add new package to handle periodic value loading
This can be used to reload a value periodically, whether from disk or
another source, while handling jitter and graceful shutdown.

Updates tailscale/corp#1297

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Iee2b4385c9abae59805f642a7308837877cb5b3f
2 months ago
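A generic sketch of that pattern (periodic reload with jitter and clean shutdown), not the actual util/reload API; the file path in main is just an example:

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// reloadEvery reloads a value on an interval with a little jitter and stops
// cleanly when ctx is canceled.
func reloadEvery(ctx context.Context, every time.Duration, load func() (string, error), onNew func(string)) {
	for {
		jitter := time.Duration(rand.Int63n(int64(every / 10))) // up to +10%
		select {
		case <-ctx.Done():
			return
		case <-time.After(every + jitter):
			if v, err := load(); err == nil {
				onNew(v)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	load := func() (string, error) {
		b, err := os.ReadFile("/etc/hostname") // any on-disk value works here
		return string(b), err
	}
	reloadEvery(ctx, time.Second, load, func(v string) { fmt.Print("reloaded: ", v) })
}
```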
Adrian Dewhurst 2f7e7be2ea control/controlclient: do not alias peer CapMap
Updates #cleanup

Change-Id: I10fd5e04310cdd7894a3caa3045b86eb0a06b6a0
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
2 months ago
Percy Wegmann 067ed0bf6f ipnlocal: ensure TailFS share notifications are non-nil
This allows the UI to distinguish between 'no shares' versus
'not being notified about shares'.

Updates ENG-2843

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 months ago
Brad Fitzpatrick 20e9f3369d control/controlclient: send load balancing hint HTTP request header
Updates tailscale/corp#1297

Change-Id: I0b102081e81dfc1261f4b05521ab248a2e4a1298
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Percy Wegmann 15c58cb77c tailfs: include whitespace in test share and filenames
Since TailFS allows spaces in folder and file names, test with spaces.

Updates tailscale/corp#16827

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2 months ago
James Tucker e37eded256
tool/gocross: add android autoflags (#11465)
Updates tailscale/corp#18202

Signed-off-by: James Tucker <james@tailscale.com>
2 months ago
Claire Wang 221de01745
control/controlclient: fix sending peer capmap changes (#11457)
Instead of just checking if a peer capmap is nil, compare the previous
state peer capmap with the new peer capmap.
Updates tailscale/corp#17516

Signed-off-by: Claire Wang <claire@tailscale.com>
2 months ago
Andrew Dunham 6da1dc84de wgengine: fix logger data race in tests
Observed in:
    https://github.com/tailscale/tailscale/actions/runs/8350904950/job/22858266932?pr=11463

Updates #11226

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I9b57db4b34b6ad91d240cd9fa7e344fc0376d52d
2 months ago
Andrew Dunham e382e4cee6 syncs: add Swap method
To mimic sync.Map.Swap, sync/atomic.Value.Swap, etc.

Updates tailscale/corp#1297

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: If7627da1bce8b552873b21d7e5ebb98904e9a650
2 months ago
Andrea Gottardo 6288c9b41e
version/prop: remove IsMacAppSandboxEnabled (#11461)
Fixes tailscale/corp#18441

For a few days, IsMacAppStore() has been returning `false` on App Store builds (IPN-macOS target in Xcode).

I regressed this in #11369 by introducing logic to detect the sandbox by checking for the APP_SANDBOX_CONTAINER_ID environment variable. I thought that was a more robust approach instead of checking the name of the executable. However, it appears that on recent macOS versions this environment variable is no longer getting set, so we should go back to the previous logic that checks for the executable path, or HOME containing references to macsys.

This PR also adds additional checks to the logic by also checking XPC_SERVICE_NAME in addition to HOME where possible. That environment variable is set inside the network extension, either macos or macsys and is good to look at if for any reason HOME is not set.
2 months ago
Mario Minardi 68d9e49a5b
api.md: add missing backtick to GET searchpaths doc (#11459)
Add missing backtick to GET searchpaths api documentation.

Updates #cleanup

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 months ago
Will Norris 349799a1ba api.md: format API docs with prettier
Mostly inconsequential minor fixes for consistency.  A couple of changes
to actual JSON examples, but all still very readable, so I think it's
fine.

Updates #cleanup

Signed-off-by: Will Norris <will@tailscale.com>
2 months ago
Irbe Krumina b0c3e6f6c5
cmd/k8s-operator,ipn/conf.go: fix --accept-routes for proxies (#11453)
Fix a bug where all proxies got configured with --accept-routes set to true.
The bug was introduced in https://github.com/tailscale/tailscale/pull/11238.

Updates#cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
James Tucker 7fe4cbbaf3
types/views: optimize slices contains under some conditions (#11449)
In control there are conditions where the leaf functions are not being
optimized away (i.e. At is not inlined), resulting in undesirable time
spent copying during SliceContains. This optimization is likely
irrelevant to simpler code or smaller structures.

Updates #optimization

Signed-off-by: James Tucker <james@tailscale.com>
2 months ago
Mario Minardi d2ccfa4edd
cmd/tailscale,ipn/ipnlocal: enable web client over quad 100 by default (#11419)
Enable the web client over 100.100.100.100 by default. Accepting traffic
from [tailnet IP]:5252 still requires setting the `webclient` user pref.

Updates https://github.com/tailscale/tailscale/issues/10261

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 months ago
Will Norris 4d747c1833 api.md: document device expiration endpoint
This was originally built for testing node expiration flows, but is also
useful for customers to force device re-auth without actually deleting
the device from the tailnet.

Updates tailscale/corp#18408

Signed-off-by: Will Norris <will@tailscale.com>
2 months ago
Mario Minardi e0886ad167
ipn/ipnlocal, tailcfg: add disable-web-client node attribute (#11418)
Add a disable-web-client node attribute and add handling for disabling
the web client when this node attribute is set.

Updates https://github.com/tailscale/tailscale/issues/10261

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 months ago
Marwan Sulaiman da7c3d1753 envknob: ensure f is not nil before using it
This PR fixes a panic that I saw in the mac app: when
parsing the env file fails, we don't get to see the
error because calling f.Name() on the nil file panics first.

Fixes #11425

Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
2 months ago
Andrea Gottardo 08ebac9acb
version,cli,safesocket: detect non-sandboxed macOS GUI (#11369)
Updates ENG-2848

We can safely disable the App Sandbox for our macsys GUI, allowing us to use `tailscale ssh` and do a few other things that we've wanted to do for a while. This PR:

- allows Tailscale SSH to be used from the macsys GUI binary when called from a CLI
- tweaks the detection of client variants in prop.go, with new functions `IsMacSys()`, `IsMacSysApp()` and `IsMacAppSandboxEnabled()`

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2 months ago
Irbe Krumina ea55f96310
cmd/tailscale/cli: fix configuring partially empty kubeconfig (#11417)
When a user deletes the last cluster/user/context from their
kubeconfig via a 'kubectl delete-[cluster|user|context]' command,
kubectx sets the relevant field in kubeconfig to 'null'.
This was breaking our conversion logic, which assumed that the field
is either non-existent or an array.

Updates tailscale/corp#18320

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
Anton Tolchanov cf8948da5f net/routetable: increase route limit used by the test
I was running all tests while preparing a recent stable release, and
this was failing because my computer is connected to a fairly large
tailnet.

```
--- FAIL: TestGetRouteTable (0.01s)
    routetable_linux_test.go:32: expected at least one default route;
    ...
```

```
$ ip route show table 52  | wc -l
1051
```

Updates #cleanup

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2 months ago
Andrew Lytvynov decd9893e4
ipn/ipnlocal: validate domain of PopBrowserURL on default control URL (#11394)
If the client uses the default Tailscale control URL, validate that all
PopBrowserURLs are under tailscale.com or *.tailscale.com. This reduces
the risk of a compromised control plane opening phishing pages for
example.

The client trusts control for many other things, but this is one easy
way to reduce that trust a bit.

Fixes #11393

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 months ago
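A minimal sketch of that validation; the helper name is hypothetical and the real check lives in ipn/ipnlocal:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// allowedPopBrowserURL reports whether a URL is on tailscale.com or a
// *.tailscale.com subdomain, the only hosts allowed when using the default
// control server.
func allowedPopBrowserURL(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil || u.Scheme != "https" {
		return false
	}
	h := u.Hostname()
	return h == "tailscale.com" || strings.HasSuffix(h, ".tailscale.com")
}

func main() {
	fmt.Println(allowedPopBrowserURL("https://login.tailscale.com/a/abc123")) // true
	fmt.Println(allowedPopBrowserURL("https://evil.example.com/phish"))       // false
}
```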
Andrew Lytvynov 48eef9e6eb
clientupdate: do not allow msiexec to reboot the OS (#11409)
According to
https://learn.microsoft.com/en-us/windows/win32/msi/standard-installer-command-line-options#promptrestart,
`/promptrestart` is ignored when `/quiet` is set, so msiexec.exe can
sometimes silently trigger a reboot. The best we can do to reduce
unexpected disruption is to just prevent restarts, until the user
chooses to do it. Restarts aren't normally needed for Tailscale updates,
but there seem to be some situations where it's triggered.

Updates #18254

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 months ago
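A small sketch of the resulting flag set; the MSI path is illustrative and the command is only printed here, not run:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Quiet install with restarts suppressed; the user chooses when to reboot.
func main() {
	cmd := exec.Command("msiexec.exe",
		"/i", `C:\Temp\tailscale-setup.msi`,
		"/quiet",
		"/norestart", // /promptrestart is ignored under /quiet, so force no reboot
	)
	fmt.Println("would run:", cmd.Args)
}
```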
Anton Tolchanov da3cf12194 VERSION.txt: this is v1.63.0
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2 months ago

@ -1,64 +0,0 @@
name: go-licenses
on:
# run action when a change lands in the main branch which updates go.mod or
# our license template file. Also allow manual triggering.
push:
branches:
- main
paths:
- go.mod
- .github/licenses.tmpl
- .github/workflows/go-licenses.yml
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
update-licenses:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version-file: go.mod
- name: Install go-licenses
run: |
go install github.com/google/go-licenses@v1.2.2-0.20220825154955-5eedde1c6584
- name: Run go-licenses
env:
# include all build tags to include platform-specific dependencies
GOFLAGS: "-tags=android,cgo,darwin,freebsd,ios,js,linux,openbsd,wasm,windows"
run: |
[ -d licenses ] || mkdir licenses
go-licenses report tailscale.com/cmd/tailscale tailscale.com/cmd/tailscaled > licenses/tailscale.md --template .github/licenses.tmpl
- name: Get access token
uses: tibdex/github-app-token@b62528385c34dbc9f38e5f4225ac829252d1ea92 # v1.8.0
id: generate-token
with:
app_id: ${{ secrets.LICENSING_APP_ID }}
installation_id: ${{ secrets.LICENSING_APP_INSTALLATION_ID }}
private_key: ${{ secrets.LICENSING_APP_PRIVATE_KEY }}
- name: Send pull request
uses: peter-evans/create-pull-request@284f54f989303d2699d373481a0cfa13ad5a6666 #v5.0.1
with:
token: ${{ steps.generate-token.outputs.token }}
author: License Updater <noreply+license-updater@tailscale.com>
committer: License Updater <noreply+license-updater@tailscale.com>
branch: licenses/cli
commit-message: "licenses: update tailscale{,d} licenses"
title: "licenses: update tailscale{,d} licenses"
body: Triggered by ${{ github.repository }}@${{ github.sha }}
signoff: true
delete-branch: true
team-reviewers: opensource-license-reviewers

@ -32,7 +32,6 @@ jobs:
- "ubuntu:18.04"
- "ubuntu:20.04"
- "ubuntu:22.04"
- "ubuntu:22.10"
- "ubuntu:23.04"
- "elementary/docker:stable"
- "elementary/docker:unstable"
@ -91,7 +90,10 @@ jobs:
|| contains(matrix.image, 'parrotsec')
|| contains(matrix.image, 'kalilinux')
- name: checkout
uses: actions/checkout@v4
# We cannot use v4, as it requires a newer glibc version than some of the
# tested images provide. See
# https://github.com/actions/checkout/issues/1487
uses: actions/checkout@v3
- name: run installer
run: scripts/installer.sh
# Package installation can fail in docker because systemd is not running

@ -2,8 +2,8 @@ name: "Kubernetes manifests"
on:
pull_request:
paths:
- './cmd/k8s-operator/**'
- './k8s-operator/**'
- 'cmd/k8s-operator/**'
- 'k8s-operator/**'
- '.github/workflows/kubemanifests.yaml'
# Cancel workflow run if there is a newer push to the same PR for which it is

@ -254,9 +254,6 @@ jobs:
goarch: amd64
- goos: openbsd
goarch: amd64
# Plan9 (disabled until 3p dependencies are fixed)
# - goos: plan9
# goarch: amd64
runs-on: ubuntu-22.04
steps:
@ -305,6 +302,47 @@ jobs:
GOOS: ios
GOARCH: arm64
crossmin: # cross-compile for platforms where we only check cmd/tailscale{,d}
strategy:
fail-fast: false # don't abort the entire matrix if one element fails
matrix:
include:
# Plan9
- goos: plan9
goarch: amd64
# AIX
- goos: aix
goarch: ppc64
runs-on: ubuntu-22.04
steps:
- name: checkout
uses: actions/checkout@v4
- name: Restore Cache
uses: actions/cache@v3
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~/go/pkg/mod/cache
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-
- name: build core
run: ./tool/go build ./cmd/tailscale ./cmd/tailscaled
env:
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.goarch }}
GOARM: ${{ matrix.goarm }}
CGO_ENABLED: "0"
android:
# similar to cross above, but android fails to build a few pieces of the
# repo. We should fix those pieces, they're small, but as a stepping stone,
@ -318,7 +356,7 @@ jobs:
# some Android breakages early.
# TODO(bradfitz): better; see https://github.com/tailscale/tailscale/issues/4482
- name: build some
run: ./tool/go install ./net/netns ./ipn/ipnlocal ./wgengine/magicsock/ ./wgengine/ ./wgengine/router/ ./wgengine/netstack ./util/dnsname/ ./ipn/ ./net/interfaces ./wgengine/router/ ./tailcfg/ ./types/logger/ ./net/dns ./hostinfo ./version
run: ./tool/go install ./net/netns ./ipn/ipnlocal ./wgengine/magicsock/ ./wgengine/ ./wgengine/router/ ./wgengine/netstack ./util/dnsname/ ./ipn/ ./net/netmon ./wgengine/router/ ./tailcfg/ ./types/logger/ ./net/dns ./hostinfo ./version
env:
GOOS: android
GOARCH: arm64

.gitignore (vendored): 1 line changed

@ -9,6 +9,7 @@
cmd/tailscale/tailscale
cmd/tailscaled/tailscaled
ssh/tailssh/testcontainers/tailscaled
# Test binary, built with `go test -c`
*.test

@ -1,5 +1,5 @@
IMAGE_REPO ?= tailscale/tailscale
SYNO_ARCH ?= "amd64"
SYNO_ARCH ?= "x86_64"
SYNO_DSM ?= "7"
TAGS ?= "latest"
@ -100,6 +100,26 @@ publishdevoperator: ## Build and publish k8s-operator image to location specifie
@test "${REPO}" != "ghcr.io/tailscale/k8s-operator" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-operator" && exit 1)
TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=operator ./build_docker.sh
publishdevnameserver: ## Build and publish k8s-nameserver image to location specified by ${REPO}
@test -n "${REPO}" || (echo "REPO=... required; e.g. REPO=ghcr.io/${USER}/tailscale" && exit 1)
@test "${REPO}" != "tailscale/tailscale" || (echo "REPO=... must not be tailscale/tailscale" && exit 1)
@test "${REPO}" != "ghcr.io/tailscale/tailscale" || (echo "REPO=... must not be ghcr.io/tailscale/tailscale" && exit 1)
@test "${REPO}" != "tailscale/k8s-nameserver" || (echo "REPO=... must not be tailscale/k8s-nameserver" && exit 1)
@test "${REPO}" != "ghcr.io/tailscale/k8s-nameserver" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-nameserver" && exit 1)
TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=k8s-nameserver ./build_docker.sh
.PHONY: sshintegrationtest
sshintegrationtest: ## Run the SSH integration tests in various Docker containers
@GOOS=linux GOARCH=amd64 go test -tags integrationtest -c ./ssh/tailssh -o ssh/tailssh/testcontainers/tailssh.test && \
GOOS=linux GOARCH=amd64 go build -o ssh/tailssh/testcontainers/tailscaled ./cmd/tailscaled && \
echo "Testing on ubuntu:focal" && docker build --build-arg="BASE=ubuntu:focal" -t ssh-ubuntu-focal ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:jammy" && docker build --build-arg="BASE=ubuntu:jammy" -t ssh-ubuntu-jammy ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:mantic" && docker build --build-arg="BASE=ubuntu:mantic" -t ssh-ubuntu-mantic ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:noble" && docker build --build-arg="BASE=ubuntu:noble" -t ssh-ubuntu-noble ssh/tailssh/testcontainers && \
echo "Testing on fedora:38" && docker build --build-arg="BASE=dokken/fedora-38" -t ssh-fedora-38 ssh/tailssh/testcontainers && \
echo "Testing on fedora:39" && docker build --build-arg="BASE=dokken/fedora-39" -t ssh-fedora-39 ssh/tailssh/testcontainers && \
echo "Testing on fedora:40" && docker build --build-arg="BASE=dokken/fedora-40" -t ssh-fedora-40 ssh/tailssh/testcontainers
help: ## Show this help
@echo "\nSpecify a command. The choices are:\n"
@grep -hE '^[0-9a-zA-Z_-]+:.*?## .*$$' ${MAKEFILE_LIST} | awk 'BEGIN {FS = ":.*?## "}; {printf " \033[0;36m%-20s\033[m %s\n", $$1, $$2}'

@ -1 +1 @@
1.61.0
1.67.0

api.md: 1298 lines changed
(File diff suppressed because it is too large.)

@ -23,6 +23,7 @@ import (
"tailscale.com/util/dnsname"
"tailscale.com/util/execqueue"
"tailscale.com/util/mak"
"tailscale.com/util/slicesx"
)
// RouteAdvertiser is an interface that allows the AppConnector to advertise
@ -36,6 +37,19 @@ type RouteAdvertiser interface {
UnadvertiseRoute(...netip.Prefix) error
}
// RouteInfo is a data structure used to persist the in memory state of an AppConnector
// so that we can know, even after a restart, which routes came from ACLs and which were
// learned from domains.
type RouteInfo struct {
// Control is the routes from the 'routes' section of an app connector acl.
Control []netip.Prefix `json:",omitempty"`
// Domains are the routes discovered by observing DNS lookups for configured domains.
Domains map[string][]netip.Addr `json:",omitempty"`
// Wildcards are the configured DNS lookup domains to observe. When a DNS query matches Wildcards,
// its result is added to Domains.
Wildcards []string `json:",omitempty"`
}
// AppConnector is an implementation of an AppConnector that performs
// its function as a subsystem inside of a tailscale node. At the control plane
// side App Connector routing is configured in terms of domains rather than IP
@ -49,6 +63,9 @@ type AppConnector struct {
logf logger.Logf
routeAdvertiser RouteAdvertiser
// storeRoutesFunc will be called to persist routes if it is not nil.
storeRoutesFunc func(*RouteInfo) error
// mu guards the fields that follow
mu sync.Mutex
@ -67,11 +84,46 @@ type AppConnector struct {
}
// NewAppConnector creates a new AppConnector.
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser) *AppConnector {
return &AppConnector{
func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser, routeInfo *RouteInfo, storeRoutesFunc func(*RouteInfo) error) *AppConnector {
ac := &AppConnector{
logf: logger.WithPrefix(logf, "appc: "),
routeAdvertiser: routeAdvertiser,
storeRoutesFunc: storeRoutesFunc,
}
if routeInfo != nil {
ac.domains = routeInfo.Domains
ac.wildcards = routeInfo.Wildcards
ac.controlRoutes = routeInfo.Control
}
return ac
}
// ShouldStoreRoutes returns true if the appconnector was created with the controlknob on
// and is storing its discovered routes persistently.
func (e *AppConnector) ShouldStoreRoutes() bool {
return e.storeRoutesFunc != nil
}
// storeRoutesLocked takes the current state of the AppConnector and persists it
func (e *AppConnector) storeRoutesLocked() error {
if !e.ShouldStoreRoutes() {
return nil
}
return e.storeRoutesFunc(&RouteInfo{
Control: e.controlRoutes,
Domains: e.domains,
Wildcards: e.wildcards,
})
}
// ClearRoutes removes all route state from the AppConnector.
func (e *AppConnector) ClearRoutes() error {
e.mu.Lock()
defer e.mu.Unlock()
e.controlRoutes = nil
e.domains = nil
e.wildcards = nil
return e.storeRoutesLocked()
}
// UpdateDomainsAndRoutes starts an asynchronous update of the configuration
@ -125,10 +177,26 @@ func (e *AppConnector) updateDomains(domains []string) {
for _, wc := range e.wildcards {
if dnsname.HasSuffix(d, wc) {
e.domains[d] = addrs
delete(oldDomains, d)
break
}
}
}
// Everything left in oldDomains is a domain we're no longer tracking
// and if we are storing route info we can unadvertise the routes
if e.ShouldStoreRoutes() {
toRemove := []netip.Prefix{}
for _, addrs := range oldDomains {
for _, a := range addrs {
toRemove = append(toRemove, netip.PrefixFrom(a, a.BitLen()))
}
}
if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", xmaps.Keys(oldDomains), toRemove, err)
}
}
e.logf("handling domains: %v and wildcards: %v", xmaps.Keys(e.domains), e.wildcards)
}
@ -152,6 +220,14 @@ func (e *AppConnector) updateRoutes(routes []netip.Prefix) {
var toRemove []netip.Prefix
// If we're storing routes and know e.controlRoutes is a good
// representation of what should be in AdvertisedRoutes we can stop
// advertising routes that used to be in e.controlRoutes but are not
// in routes.
if e.ShouldStoreRoutes() {
toRemove = routesWithout(e.controlRoutes, routes)
}
nextRoute:
for _, r := range routes {
for _, addr := range e.domains {
@ -170,6 +246,9 @@ nextRoute:
}
e.controlRoutes = routes
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
}
// Domains returns the currently configured domain list.
@ -380,6 +459,9 @@ func (e *AppConnector) scheduleAdvertisement(domain string, routes ...netip.Pref
e.logf("[v2] advertised route for %v: %v", domain, addr)
}
}
if err := e.storeRoutesLocked(); err != nil {
e.logf("failed to store route info: %v", err)
}
})
}
@ -400,3 +482,15 @@ func (e *AppConnector) addDomainAddrLocked(domain string, addr netip.Addr) {
func compareAddr(l, r netip.Addr) int {
return l.Compare(r)
}
// routesWithout returns a without b where a and b
// are unsorted slices of netip.Prefix
func routesWithout(a, b []netip.Prefix) []netip.Prefix {
m := make(map[netip.Prefix]bool, len(b))
for _, p := range b {
m[p] = true
}
return slicesx.Filter(make([]netip.Prefix, 0, len(a)), a, func(p netip.Prefix) bool {
return !m[p]
})
}

@ -17,194 +17,238 @@ import (
"tailscale.com/util/must"
)
func fakeStoreRoutes(*RouteInfo) error { return nil }
func TestUpdateDomains(t *testing.T) {
ctx := context.Background()
a := NewAppConnector(t.Logf, nil)
a.UpdateDomains([]string{"example.com"})
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
}
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.Wait(ctx)
if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
addr := netip.MustParseAddr("192.0.0.8")
a.domains["example.com"] = append(a.domains["example.com"], addr)
a.UpdateDomains([]string{"example.com"})
a.Wait(ctx)
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
if got, want := a.domains["example.com"], []netip.Addr{addr}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
}
}
func TestUpdateRoutes(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
a.updateDomains([]string{"*.example.com"})
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com"})
// This route should be collapsed into the range
a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
a.Wait(ctx)
// This route should be collapsed into the range
a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
}
if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
t.Fatalf("got %v, want %v", rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
}
// This route should not be collapsed or removed
a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
a.Wait(ctx)
// This route should not be collapsed or removed
a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
a.Wait(ctx)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
a.updateRoutes(routes)
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
a.updateRoutes(routes)
slices.SortFunc(rc.Routes(), prefixCompare)
rc.SetRoutes(slices.Compact(rc.Routes()))
slices.SortFunc(routes, prefixCompare)
slices.SortFunc(rc.Routes(), prefixCompare)
rc.SetRoutes(slices.Compact(rc.Routes()))
slices.SortFunc(routes, prefixCompare)
// Ensure that the non-matching /32 is preserved, even though it's in the domains table.
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Errorf("added routes: got %v, want %v", rc.Routes(), routes)
}
// Ensure that the non-matching /32 is preserved, even though it's in the domains table.
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Errorf("added routes: got %v, want %v", rc.Routes(), routes)
}
// Ensure that the contained /32 is removed, replaced by the /24.
wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
// Ensure that the contained /32 is removed, replaced by the /24.
wantRemoved := []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}
if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
}
}
}
func TestUpdateRoutesUnadvertisesContainedRoutes(t *testing.T) {
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.updateRoutes(routes)
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Fatalf("got %v, want %v", rc.Routes(), routes)
for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
a.updateRoutes(routes)
if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
t.Fatalf("got %v, want %v", rc.Routes(), routes)
}
}
}
func TestDomainRoutes(t *testing.T) {
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(context.Background())
want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")},
}
for _, shouldStore := range []bool{false, true} {
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(context.Background())
want := map[string][]netip.Addr{
"example.com": {netip.MustParseAddr("192.0.0.8")},
}
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want)
if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
t.Fatalf("DomainRoutes: got %v, want %v", got, want)
}
}
}
func TestObserveDNSResponse(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
// a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
// a has no domains configured, so it should not advertise any routes
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
// a CNAME record chain should result in a route being added if the chain
// matches a routed domain.
a.updateDomains([]string{"www.example.com", "example.com"})
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.updateDomains([]string{"example.com"})
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if the chain
// even if only found in the middle of the chain
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// a CNAME record chain should result in a route being added if the chain
// matches a routed domain.
a.updateDomains([]string{"www.example.com", "example.com"})
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
// a CNAME record chain should result in a route being added if the chain
// even if only found in the middle of the chain
a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
a.Wait(ctx)
wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
// don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
// don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24")
a.updateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
// don't re-advertise routes that have already been advertised
a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
// don't advertise addresses that are already in a control provided route
pfx := netip.MustParsePrefix("192.0.2.0/24")
a.updateRoutes([]netip.Prefix{pfx})
wantRoutes = append(wantRoutes, pfx)
a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
a.Wait(ctx)
if !slices.Equal(rc.Routes(), wantRoutes) {
t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
}
if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
t.Errorf("missing %v from %v", "192.0.2.1", a.domains["exmaple.com"])
}
}
}
func TestWildcardDomains(t *testing.T) {
ctx := context.Background()
rc := &appctest.RouteCollector{}
a := NewAppConnector(t.Logf, rc)
a.updateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want)
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
a.updateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
a.updateDomains([]string{"*.example.com"})
a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
a.Wait(ctx)
if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
t.Errorf("routes: got %v; want %v", got, want)
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
// There was an early regression where the wildcard domain was added repeatedly, this guards against that.
a.updateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
a.updateDomains([]string{"*.example.com", "example.com"})
if _, ok := a.domains["foo.example.com"]; !ok {
t.Errorf("expected foo.example.com to be preserved in domains due to wildcard")
}
if got, want := a.wildcards, []string{"example.com"}; !slices.Equal(got, want) {
t.Errorf("wildcards: got %v; want %v", got, want)
}
// There was an early regression where the wildcard domain was added repeatedly, this guards against that.
a.updateDomains([]string{"*.example.com", "example.com"})
if len(a.wildcards) != 1 {
t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
}
}
}
@ -310,3 +354,169 @@ func prefixCompare(a, b netip.Prefix) int {
}
return a.Addr().Compare(b.Addr())
}
func prefixes(in ...string) []netip.Prefix {
toRet := make([]netip.Prefix, len(in))
for i, s := range in {
toRet[i] = netip.MustParsePrefix(s)
}
return toRet
}
func TestUpdateRouteRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
// nothing has yet been advertised
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.2/32"))
a.Wait(ctx)
// the routes passed to UpdateDomainsAndRoutes have been advertised
assertRoutes("simple update", prefixes("1.2.3.1/32", "1.2.3.2/32"), []netip.Prefix{})
// one route the same, one different
a.UpdateDomainsAndRoutes([]string{}, prefixes("1.2.3.1/32", "1.2.3.3/32"))
a.Wait(ctx)
// old behavior: routes are not removed, resulting routes are both old and new
// (we have dupe 1.2.3.1 routes because the test RouteAdvertiser doesn't have the deduplication
// the real one does)
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior: routes are removed, resulting routes are new only
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.1/32", "1.2.3.3/32")
wantRemovedRoutes = prefixes("1.2.3.2/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateDomainRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestUpdateWildcardRouteRemoval(t *testing.T) {
for _, shouldStore := range []bool{false, true} {
ctx := context.Background()
rc := &appctest.RouteCollector{}
assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
if !slices.Equal(routes, rc.Routes()) {
t.Fatalf("%s: (shouldStore=%t) routes want %v, got %v", prefix, shouldStore, routes, rc.Routes())
}
if !slices.Equal(removedRoutes, rc.RemovedRoutes()) {
t.Fatalf("%s: (shouldStore=%t) removedRoutes want %v, got %v", prefix, shouldStore, removedRoutes, rc.RemovedRoutes())
}
}
var a *AppConnector
if shouldStore {
a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
} else {
a = NewAppConnector(t.Logf, rc, nil, nil)
}
assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com", "*.b.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// adding domains doesn't immediately cause any routes to be advertised
assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
a.ObserveDNSResponse(dnsResponse("1.b.example.com.", "1.2.3.3"))
a.ObserveDNSResponse(dnsResponse("2.b.example.com.", "1.2.3.4"))
a.Wait(ctx)
// observing dns responses causes routes to be advertised
assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
a.UpdateDomainsAndRoutes([]string{"a.example.com"}, []netip.Prefix{})
a.Wait(ctx)
// old behavior, routes are not removed
wantRoutes := prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32")
wantRemovedRoutes := []netip.Prefix{}
if shouldStore {
// new behavior, routes are removed for *.b.example.com
wantRoutes = prefixes("1.2.3.1/32", "1.2.3.2/32")
wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
}
assertRoutes("removal", wantRoutes, wantRemovedRoutes)
}
}
func TestRoutesWithout(t *testing.T) {
assert := func(msg string, got, want []netip.Prefix) {
if !slices.Equal(want, got) {
t.Errorf("%s: want %v, got %v", msg, want, got)
}
}
assert("empty routes", routesWithout([]netip.Prefix{}, []netip.Prefix{}), []netip.Prefix{})
assert("a empty", routesWithout([]netip.Prefix{}, prefixes("1.1.1.1/32", "1.1.1.2/32")), []netip.Prefix{})
assert("b empty", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), []netip.Prefix{}), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("no overlap", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.3/32", "1.1.1.4/32")), prefixes("1.1.1.1/32", "1.1.1.2/32"))
assert("a has fewer", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32"), prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32")), []netip.Prefix{})
assert("a has more", routesWithout(prefixes("1.1.1.1/32", "1.1.1.2/32", "1.1.1.3/32", "1.1.1.4/32"), prefixes("1.1.1.1/32", "1.1.1.3/32")), prefixes("1.1.1.2/32", "1.1.1.4/32"))
}
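The assertions above pin down routesWithout as an order-preserving set difference over prefixes. A minimal sketch of a helper with that contract, written only to illustrate what the test encodes (the real appc implementation may differ):

	// routesWithoutSketch returns the prefixes of a that do not appear in b,
	// preserving a's order. Illustrative only; not the actual appc helper.
	// Requires "net/netip".
	func routesWithoutSketch(a, b []netip.Prefix) []netip.Prefix {
		skip := make(map[netip.Prefix]bool, len(b))
		for _, p := range b {
			skip[p] = true
		}
		out := make([]netip.Prefix, 0, len(a))
		for _, p := range a {
			if !skip[p] {
				out = append(out, p)
			}
		}
		return out
	}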

@ -37,7 +37,7 @@ while [ "$#" -gt 1 ]; do
--extra-small)
shift
ldflags="$ldflags -w -s"
tags="${tags:+$tags,}ts_omit_aws,ts_omit_bird,ts_omit_tap,ts_omit_kube"
tags="${tags:+$tags,}ts_omit_aws,ts_omit_bird,ts_omit_tap,ts_omit_kube,ts_omit_completion"
;;
--box)
shift

@ -49,6 +49,7 @@ case "$TARGET" in
-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
--base="${BASE}" \
--tags="${TAGS}" \
--gotags="ts_kube" \
--repos="${REPOS}" \
--push="${PUSH}" \
--target="${PLATFORM}" \
@ -70,6 +71,22 @@ case "$TARGET" in
--target="${PLATFORM}" \
/usr/local/bin/operator
;;
k8s-nameserver)
DEFAULT_REPOS="tailscale/k8s-nameserver"
REPOS="${REPOS:-${DEFAULT_REPOS}}"
go run github.com/tailscale/mkctr \
--gopaths="tailscale.com/cmd/k8s-nameserver:/usr/local/bin/k8s-nameserver" \
--ldflags=" \
-X tailscale.com/version.longStamp=${VERSION_LONG} \
-X tailscale.com/version.shortStamp=${VERSION_SHORT} \
-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
--base="${BASE}" \
--tags="${TAGS}" \
--repos="${REPOS}" \
--push="${PUSH}" \
--target="${PLATFORM}" \
/usr/local/bin/k8s-nameserver
;;
*)
echo "unknown target: $TARGET"
exit 1

@ -49,3 +49,11 @@ type ReloadConfigResponse struct {
Reloaded bool // whether the config was reloaded
Err string // any error message
}
// ExitNodeSuggestionResponse is the response to a LocalAPI suggest-exit-node GET request.
// It returns the StableNodeID, name, and location of a suggested exit node for the client making the request.
type ExitNodeSuggestionResponse struct {
ID tailcfg.StableNodeID
Name string
Location tailcfg.LocationView `json:",omitempty"`
}

@ -28,6 +28,7 @@ import (
"go4.org/mem"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/drive"
"tailscale.com/envknob"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
@ -35,7 +36,6 @@ import (
"tailscale.com/paths"
"tailscale.com/safesocket"
"tailscale.com/tailcfg"
"tailscale.com/tailfs"
"tailscale.com/tka"
"tailscale.com/types/key"
"tailscale.com/types/tkatype"
@ -1418,53 +1418,62 @@ func (lc *LocalClient) CheckUpdate(ctx context.Context) (*tailcfg.ClientVersion,
return &cv, nil
}
// TailFSSetFileServerAddr instructs TailFS to use the server at addr to access
// SetUseExitNode toggles the use of an exit node on or off.
// To turn it on, there must have been a previously used exit node.
// The most recently used exit node is reused.
// This is a convenience method for GUIs. To select a specific exit node, update the prefs.
func (lc *LocalClient) SetUseExitNode(ctx context.Context, on bool) error {
_, err := lc.send(ctx, "POST", "/localapi/v0/set-use-exit-node-enabled?enabled="+strconv.FormatBool(on), http.StatusOK, nil)
return err
}
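A brief usage sketch for the toggle above, assuming an existing *tailscale.LocalClient (lc) and context (ctx); it only succeeds when the prefs already record a previously used exit node:

	// Sketch: re-enable the most recently used exit node from a GUI action.
	if err := lc.SetUseExitNode(ctx, true); err != nil {
		log.Printf("enabling exit node: %v", err)
	}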
// DriveSetServerAddr instructs Taildrive to use the server at addr to access
// the filesystem. This is used on platforms like Windows and macOS to let
// TailFS know to use the file server running in the GUI app.
func (lc *LocalClient) TailFSSetFileServerAddr(ctx context.Context, addr string) error {
_, err := lc.send(ctx, "PUT", "/localapi/v0/tailfs/fileserver-address", http.StatusCreated, strings.NewReader(addr))
// Taildrive know to use the file server running in the GUI app.
func (lc *LocalClient) DriveSetServerAddr(ctx context.Context, addr string) error {
_, err := lc.send(ctx, "PUT", "/localapi/v0/drive/fileserver-address", http.StatusCreated, strings.NewReader(addr))
return err
}
// TailFSShareSet adds or updates the given share in the list of shares that
// TailFS will serve to remote nodes. If a share with the same name already
// DriveShareSet adds or updates the given share in the list of shares that
// Taildrive will serve to remote nodes. If a share with the same name already
// exists, the existing share is replaced/updated.
func (lc *LocalClient) TailFSShareSet(ctx context.Context, share *tailfs.Share) error {
_, err := lc.send(ctx, "PUT", "/localapi/v0/tailfs/shares", http.StatusCreated, jsonBody(share))
func (lc *LocalClient) DriveShareSet(ctx context.Context, share *drive.Share) error {
_, err := lc.send(ctx, "PUT", "/localapi/v0/drive/shares", http.StatusCreated, jsonBody(share))
return err
}
// TailFSShareRemove removes the share with the given name from the list of
// shares that TailFS will serve to remote nodes.
func (lc *LocalClient) TailFSShareRemove(ctx context.Context, name string) error {
// DriveShareRemove removes the share with the given name from the list of
// shares that Taildrive will serve to remote nodes.
func (lc *LocalClient) DriveShareRemove(ctx context.Context, name string) error {
_, err := lc.send(
ctx,
"DELETE",
"/localapi/v0/tailfs/shares",
"/localapi/v0/drive/shares",
http.StatusNoContent,
strings.NewReader(name))
return err
}
// TailFSShareRename renames the share from old to new name.
func (lc *LocalClient) TailFSShareRename(ctx context.Context, oldName, newName string) error {
// DriveShareRename renames the share from old to new name.
func (lc *LocalClient) DriveShareRename(ctx context.Context, oldName, newName string) error {
_, err := lc.send(
ctx,
"POST",
"/localapi/v0/tailfs/shares",
"/localapi/v0/drive/shares",
http.StatusNoContent,
jsonBody([2]string{oldName, newName}))
return err
}
// TailFSShareList returns the list of shares that TailFS is currently serving
// DriveShareList returns the list of shares that Taildrive is currently serving
// to remote nodes.
func (lc *LocalClient) TailFSShareList(ctx context.Context) ([]*tailfs.Share, error) {
result, err := lc.get200(ctx, "/localapi/v0/tailfs/shares")
func (lc *LocalClient) DriveShareList(ctx context.Context) ([]*drive.Share, error) {
result, err := lc.get200(ctx, "/localapi/v0/drive/shares")
if err != nil {
return nil, err
}
var shares []*tailfs.Share
var shares []*drive.Share
err = json.Unmarshal(result, &shares)
return shares, err
}
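Taken together, the renamed helpers form a small CRUD surface over /localapi/v0/drive/shares. A hedged usage sketch, assuming an existing lc and ctx; the Share field names shown are illustrative rather than taken from this diff:

	// Sketch: publish a share, rename it, list what is served, then remove it.
	share := &drive.Share{Name: "docs", Path: "/srv/docs"} // field names assumed
	if err := lc.DriveShareSet(ctx, share); err != nil {
		log.Fatalf("set share: %v", err)
	}
	if err := lc.DriveShareRename(ctx, "docs", "documents"); err != nil {
		log.Fatalf("rename share: %v", err)
	}
	shares, err := lc.DriveShareList(ctx)
	if err != nil {
		log.Fatalf("list shares: %v", err)
	}
	log.Printf("serving %d share(s)", len(shares))
	if err := lc.DriveShareRemove(ctx, "documents"); err != nil {
		log.Fatalf("remove share: %v", err)
	}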
@ -1505,3 +1514,12 @@ func (w *IPNBusWatcher) Next() (ipn.Notify, error) {
}
return n, nil
}
// SuggestExitNode requests an exit node suggestion and returns the exit node's details.
func (lc *LocalClient) SuggestExitNode(ctx context.Context) (apitype.ExitNodeSuggestionResponse, error) {
body, err := lc.get200(ctx, "/localapi/v0/suggest-exit-node")
if err != nil {
return apitype.ExitNodeSuggestionResponse{}, err
}
return decodeJSON[apitype.ExitNodeSuggestionResponse](body)
}
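A short usage sketch for the new endpoint, assuming an existing lc and ctx; the ID and Name fields come from the ExitNodeSuggestionResponse type documented earlier in this diff:

	// Sketch: ask the local daemon for a suggested exit node.
	res, err := lc.SuggestExitNode(ctx)
	if err != nil {
		log.Printf("suggest exit node: %v", err)
		return
	}
	log.Printf("suggested exit node: %s (%s)", res.Name, res.ID)
	// Applying the suggestion would go through prefs (e.g. setting the exit
	// node ID); that path is outside this diff and omitted here.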

@ -34,9 +34,9 @@ func TestDeps(t *testing.T) {
deptest.DepChecker{
BadDeps: map[string]string{
// Make sure we don't again accidentally bring in a dependency on
// TailFS or its transitive dependencies
"tailscale.com/tailfs/tailfsimpl": "https://github.com/tailscale/tailscale/pull/10631",
"github.com/studio-b12/gowebdav": "https://github.com/tailscale/tailscale/pull/10631",
// drive or its transitive dependencies
"tailscale.com/drive/driveimpl": "https://github.com/tailscale/tailscale/pull/10631",
"github.com/studio-b12/gowebdav": "https://github.com/tailscale/tailscale/pull/10631",
},
}.Check(t)
}

@ -223,7 +223,7 @@ func (s *Server) awaitUserAuth(ctx context.Context, session *browserSession) err
func (s *Server) newSessionID() (string, error) {
raw := make([]byte, 16)
for i := 0; i < 5; i++ {
for range 5 {
if _, err := rand.Read(raw); err != nil {
return "", err
}
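The loop rewrites in this and the following hunks (for range 5, for range 2, for i := range 2) use Go 1.22's range-over-integer form; the counted loop and the range form below are equivalent:

	// Go 1.22+: ranging over an integer n yields 0 through n-1.
	for i := 0; i < 5; i++ {
		fmt.Println(i) // 0, 1, 2, 3, 4
	}
	for i := range 5 {
		fmt.Println(i) // 0, 1, 2, 3, 4
	}
	for range 5 {
		// use this form when the index is not needed
	}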

@ -34,7 +34,7 @@
"prettier-plugin-organize-imports": "^3.2.2",
"tailwindcss": "^3.3.3",
"typescript": "^5.3.3",
"vite": "^5.1.4",
"vite": "^5.1.7",
"vite-plugin-svgr": "^4.2.0",
"vite-tsconfig-paths": "^3.5.0",
"vitest": "^1.3.1"

@ -1150,7 +1150,15 @@ func (s *Server) tailscaleUp(ctx context.Context, st *ipnstate.Status, opt tails
if !isRunning {
ipnOptions := ipn.Options{AuthKey: opt.AuthKey}
if opt.ControlURL != "" {
ipnOptions.UpdatePrefs = &ipn.Prefs{ControlURL: opt.ControlURL}
_, err := s.lc.EditPrefs(ctx, &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
ControlURL: opt.ControlURL,
},
ControlURLSet: true,
})
if err != nil {
s.logf("edit prefs: %v", err)
}
}
if err := s.lc.Start(ctx, ipnOptions); err != nil {
s.logf("start: %v", err)

@ -20,7 +20,7 @@
"@jridgewell/gen-mapping" "^0.3.0"
"@jridgewell/trace-mapping" "^0.3.9"
"@babel/code-frame@^7.0.0", "@babel/code-frame@^7.22.10", "@babel/code-frame@^7.22.13", "@babel/code-frame@^7.22.5", "@babel/code-frame@^7.23.4":
"@babel/code-frame@^7.0.0", "@babel/code-frame@^7.22.13", "@babel/code-frame@^7.23.4":
version "7.23.4"
resolved "https://registry.yarnpkg.com/@babel/code-frame/-/code-frame-7.23.4.tgz#03ae5af150be94392cb5c7ccd97db5a19a5da6aa"
integrity sha512-r1IONyb6Ia+jYR2vvIDhdWdlTGhqbBoFqLTQidzZ4kepUFH15ejXvFHxCVbtl7BOXIudsIubf4E81xeA3h3IXA==
@ -63,7 +63,7 @@
eslint-visitor-keys "^2.1.0"
semver "^6.3.1"
"@babel/generator@^7.22.10", "@babel/generator@^7.23.0", "@babel/generator@^7.23.3", "@babel/generator@^7.23.4":
"@babel/generator@^7.23.3", "@babel/generator@^7.23.4":
version "7.23.4"
resolved "https://registry.yarnpkg.com/@babel/generator/-/generator-7.23.4.tgz#4a41377d8566ec18f807f42962a7f3551de83d1c"
integrity sha512-esuS49Cga3HcThFNebGhlgsrVLkvhqvYDTzgjfFFlHJcIfLe5jFmRRfCQ1KuBfc4Jrtn3ndLgKWAKjBE+IraYQ==
@ -87,7 +87,7 @@
dependencies:
"@babel/types" "^7.22.15"
"@babel/helper-compilation-targets@^7.22.10", "@babel/helper-compilation-targets@^7.22.15", "@babel/helper-compilation-targets@^7.22.6":
"@babel/helper-compilation-targets@^7.22.15", "@babel/helper-compilation-targets@^7.22.6":
version "7.22.15"
resolved "https://registry.yarnpkg.com/@babel/helper-compilation-targets/-/helper-compilation-targets-7.22.15.tgz#0698fc44551a26cf29f18d4662d5bf545a6cfc52"
integrity sha512-y6EEzULok0Qvz8yyLkCvVX+02ic+By2UdOhylwUOvOn9dvYc9mKICJuuU1n1XBI02YWsNsnrY1kc6DVbjcXbtw==
@ -160,14 +160,14 @@
dependencies:
"@babel/types" "^7.23.0"
"@babel/helper-module-imports@^7.22.15", "@babel/helper-module-imports@^7.22.5":
"@babel/helper-module-imports@^7.22.15":
version "7.22.15"
resolved "https://registry.yarnpkg.com/@babel/helper-module-imports/-/helper-module-imports-7.22.15.tgz#16146307acdc40cc00c3b2c647713076464bdbf0"
integrity sha512-0pYVBnDKZO2fnSPCrgM/6WMc7eS20Fbok+0r88fp+YtWVLZrp4CkafFGIp+W0VKw4a22sgebPT99y+FDNMdP4w==
dependencies:
"@babel/types" "^7.22.15"
"@babel/helper-module-transforms@^7.22.9", "@babel/helper-module-transforms@^7.23.3":
"@babel/helper-module-transforms@^7.23.3":
version "7.23.3"
resolved "https://registry.yarnpkg.com/@babel/helper-module-transforms/-/helper-module-transforms-7.23.3.tgz#d7d12c3c5d30af5b3c0fcab2a6d5217773e2d0f1"
integrity sha512-7bBs4ED9OmswdfDzpz4MpWgSrV7FXlc3zIagvLFjS5H+Mk7Snr21vQ6QwrsoCGMfNC4e4LQPdoULEt4ykz0SRQ==
@ -229,17 +229,17 @@
dependencies:
"@babel/types" "^7.22.5"
"@babel/helper-string-parser@^7.22.5", "@babel/helper-string-parser@^7.23.4":
"@babel/helper-string-parser@^7.23.4":
version "7.23.4"
resolved "https://registry.yarnpkg.com/@babel/helper-string-parser/-/helper-string-parser-7.23.4.tgz#9478c707febcbbe1ddb38a3d91a2e054ae622d83"
integrity sha512-803gmbQdqwdf4olxrX4AJyFBV/RTr3rSmOj0rKwesmzlfhYNDEs+/iOcznzpNWlJlIlTJC2QfPFcHB6DlzdVLQ==
"@babel/helper-validator-identifier@^7.22.20", "@babel/helper-validator-identifier@^7.22.5":
"@babel/helper-validator-identifier@^7.22.20":
version "7.22.20"
resolved "https://registry.yarnpkg.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz#c4ae002c61d2879e724581d96665583dbc1dc0e0"
integrity sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==
"@babel/helper-validator-option@^7.22.15", "@babel/helper-validator-option@^7.22.5":
"@babel/helper-validator-option@^7.22.15":
version "7.22.15"
resolved "https://registry.yarnpkg.com/@babel/helper-validator-option/-/helper-validator-option-7.22.15.tgz#694c30dfa1d09a6534cdfcafbe56789d36aba040"
integrity sha512-bMn7RmyFjY/mdECUbgn9eoSY4vqvacUnS9i9vGAGttgFWesO6B4CYWA7XlpbWgBt71iv/hfbPlynohStqnu5hA==
@ -253,7 +253,7 @@
"@babel/template" "^7.22.15"
"@babel/types" "^7.22.19"
"@babel/helpers@^7.22.10", "@babel/helpers@^7.23.2":
"@babel/helpers@^7.23.2":
version "7.23.4"
resolved "https://registry.yarnpkg.com/@babel/helpers/-/helpers-7.23.4.tgz#7d2cfb969aa43222032193accd7329851facf3c1"
integrity sha512-HfcMizYz10cr3h29VqyfGL6ZWIjTwWfvYBMsBVGwpcbhNGe3wQ1ZXZRPzZoAHhd9OqHadHqjQ89iVKINXnbzuw==
@ -262,7 +262,7 @@
"@babel/traverse" "^7.23.4"
"@babel/types" "^7.23.4"
"@babel/highlight@^7.22.10", "@babel/highlight@^7.22.13", "@babel/highlight@^7.23.4":
"@babel/highlight@^7.23.4":
version "7.23.4"
resolved "https://registry.yarnpkg.com/@babel/highlight/-/highlight-7.23.4.tgz#edaadf4d8232e1a961432db785091207ead0621b"
integrity sha512-acGdbYSfp2WheJoJm/EBBBLh/ID8KDc64ISZ9DYtBmC8/Q204PZJLHyzeB5qMzJ5trcOkybd78M4x2KWsUq++A==
@ -271,7 +271,7 @@
chalk "^2.4.2"
js-tokens "^4.0.0"
"@babel/parser@^7.22.10", "@babel/parser@^7.22.15", "@babel/parser@^7.22.5", "@babel/parser@^7.23.0", "@babel/parser@^7.23.3", "@babel/parser@^7.23.4":
"@babel/parser@^7.22.15", "@babel/parser@^7.23.3", "@babel/parser@^7.23.4":
version "7.23.4"
resolved "https://registry.yarnpkg.com/@babel/parser/-/parser-7.23.4.tgz#409fbe690c333bb70187e2de4021e1e47a026661"
integrity sha512-vf3Xna6UEprW+7t6EtOmFpHNAuxw3xqPZghy+brsnusscJRW5BMUzzHZc5ICjULee81WeUV2jjakG09MDglJXQ==
@ -1093,7 +1093,7 @@
dependencies:
regenerator-runtime "^0.14.0"
"@babel/template@^7.22.15", "@babel/template@^7.22.5":
"@babel/template@^7.22.15":
version "7.22.15"
resolved "https://registry.yarnpkg.com/@babel/template/-/template-7.22.15.tgz#09576efc3830f0430f4548ef971dde1350ef2f38"
integrity sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==
@ -1102,7 +1102,7 @@
"@babel/parser" "^7.22.15"
"@babel/types" "^7.22.15"
"@babel/traverse@^7.22.10", "@babel/traverse@^7.23.3", "@babel/traverse@^7.23.4":
"@babel/traverse@^7.23.3", "@babel/traverse@^7.23.4":
version "7.23.4"
resolved "https://registry.yarnpkg.com/@babel/traverse/-/traverse-7.23.4.tgz#c2790f7edf106d059a0098770fe70801417f3f85"
integrity sha512-IYM8wSUwunWTB6tFC2dkKZhxbIjHoWemdK+3f8/wq8aKhbUscxD5MX72ubd90fxvFknaLPeGw5ycU84V1obHJg==
@ -1118,7 +1118,7 @@
debug "^4.1.0"
globals "^11.1.0"
"@babel/types@^7.21.3", "@babel/types@^7.22.10", "@babel/types@^7.22.15", "@babel/types@^7.22.19", "@babel/types@^7.22.5", "@babel/types@^7.23.0", "@babel/types@^7.23.3", "@babel/types@^7.23.4", "@babel/types@^7.4.4":
"@babel/types@^7.21.3", "@babel/types@^7.22.15", "@babel/types@^7.22.19", "@babel/types@^7.22.5", "@babel/types@^7.23.0", "@babel/types@^7.23.3", "@babel/types@^7.23.4", "@babel/types@^7.4.4":
version "7.23.4"
resolved "https://registry.yarnpkg.com/@babel/types/-/types-7.23.4.tgz#7206a1810fc512a7f7f7d4dace4cb4c1c9dbfb8e"
integrity sha512-7uIFwVYpoplT5jp/kVv6EF93VaJ8H+Yn5IczYiaAi98ajzjfoZfslet/e0sLh+wVBjb2qqIut1b0S26VSafsSQ==
@ -2474,7 +2474,7 @@ camelcase@^6.2.0:
resolved "https://registry.yarnpkg.com/camelcase/-/camelcase-6.3.0.tgz#5685b95eb209ac9c0c177467778c9c84df58ba9a"
integrity sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA==
caniuse-lite@^1.0.30001517, caniuse-lite@^1.0.30001520, caniuse-lite@^1.0.30001541:
caniuse-lite@^1.0.30001520, caniuse-lite@^1.0.30001541:
version "1.0.30001565"
resolved "https://registry.yarnpkg.com/caniuse-lite/-/caniuse-lite-1.0.30001565.tgz#a528b253c8a2d95d2b415e11d8b9942acc100c4f"
integrity sha512-xrE//a3O7TP0vaJ8ikzkD2c2NgcVUvsEe2IvFTntV4Yd1Z9FVzh+gW+enX96L0psrbaFMcVcH2l90xNuGDWc8w==
@ -2587,11 +2587,6 @@ confusing-browser-globals@^1.0.11:
resolved "https://registry.yarnpkg.com/confusing-browser-globals/-/confusing-browser-globals-1.0.11.tgz#ae40e9b57cdd3915408a2805ebd3a5585608dc81"
integrity sha512-JsPKdmh8ZkmnHxDk55FZ1TqVLvEQTvoByJZRN9jzI0UjxK/QgAmsphz7PGtqgPieQZ/CQcHWXCR7ATDNhGe+YA==
convert-source-map@^1.7.0:
version "1.9.0"
resolved "https://registry.yarnpkg.com/convert-source-map/-/convert-source-map-1.9.0.tgz#7faae62353fb4213366d0ca98358d22e8368b05f"
integrity sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==
convert-source-map@^2.0.0:
version "2.0.0"
resolved "https://registry.yarnpkg.com/convert-source-map/-/convert-source-map-2.0.0.tgz#4b560f649fc4e918dd0ab75cf4961e8bc882d82a"
@ -2772,7 +2767,7 @@ dot-case@^3.0.4:
no-case "^3.0.4"
tslib "^2.0.3"
electron-to-chromium@^1.4.477, electron-to-chromium@^1.4.535:
electron-to-chromium@^1.4.535:
version "1.4.596"
resolved "https://registry.yarnpkg.com/electron-to-chromium/-/electron-to-chromium-1.4.596.tgz#6752d1aa795d942d49dfc5d3764d6ea283fab1d7"
integrity sha512-zW3zbZ40Icb2BCWjm47nxwcFGYlIgdXkAx85XDO7cyky9J4QQfq8t0W19/TLZqq3JPQXtlv8BPIGmfa9Jb4scg==
@ -3323,7 +3318,7 @@ gensync@^1.0.0-beta.2:
resolved "https://registry.yarnpkg.com/gensync/-/gensync-1.0.0-beta.2.tgz#32a6ee76c3d7f52d46b2b1ae5d93fea8580a25e0"
integrity sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==
get-func-name@^2.0.0, get-func-name@^2.0.1, get-func-name@^2.0.2:
get-func-name@^2.0.1, get-func-name@^2.0.2:
version "2.0.2"
resolved "https://registry.yarnpkg.com/get-func-name/-/get-func-name-2.0.2.tgz#0d7cf20cd13fda808669ffa88f4ffc7a3943fc41"
integrity sha512-8vXOvuE167CtIc3OyItco7N/dpRtBbYOsPsXCz7X/PMnlGjYjSGuZJgM1Y7mmew7BKf9BqvLX2tnOVy1BBUsxQ==
@ -3486,13 +3481,6 @@ has-tostringtag@^1.0.0:
dependencies:
has-symbols "^1.0.2"
has@^1.0.3:
version "1.0.3"
resolved "https://registry.yarnpkg.com/has/-/has-1.0.3.tgz#722d7cbfc1f6aa8241f16dd814e011e1f41e8796"
integrity sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==
dependencies:
function-bind "^1.1.1"
hasown@^2.0.0:
version "2.0.0"
resolved "https://registry.yarnpkg.com/hasown/-/hasown-2.0.0.tgz#f4c513d454a57b7c7e1650778de226b11700546c"
@ -4087,7 +4075,7 @@ mz@^2.7.0:
object-assign "^4.0.1"
thenify-all "^1.0.0"
nanoid@^3.3.6, nanoid@^3.3.7:
nanoid@^3.3.7:
version "3.3.7"
resolved "https://registry.yarnpkg.com/nanoid/-/nanoid-3.3.7.tgz#d0c301a691bc8d54efa0a2226ccf3fe2fd656bd8"
integrity sha512-eSRppjcPIatRIMC1U6UngP8XFcz8MQWGQdt1MTBQ7NaAmvXDfvNxbvWV3x2y6CdEUciCSsDHDQZbhYaB8QEo2g==
@ -5121,7 +5109,7 @@ typescript@^5.3.3:
resolved "https://registry.yarnpkg.com/typescript/-/typescript-5.3.3.tgz#b3ce6ba258e72e6305ba66f5c9b452aaee3ffe37"
integrity sha512-pXWcraxM0uxAS+tN0AG/BF2TyqmHO014Z070UsJ+pFvYuRSq8KH8DmWpnbXe0pEPDHXZV3FcAbJkijJ5oNEnWw==
ufo@^1.1.2, ufo@^1.3.2:
ufo@^1.3.2:
version "1.4.0"
resolved "https://registry.yarnpkg.com/ufo/-/ufo-1.4.0.tgz#39845b31be81b4f319ab1d99fd20c56cac528d32"
integrity sha512-Hhy+BhRBleFjpJ2vchUNN40qgkh0366FWJGqVLYBHev0vpHTrXSA0ryT+74UiW6KWsldNurQMKGqCm1M2zBciQ==
@ -5169,7 +5157,7 @@ universalify@^0.2.0:
resolved "https://registry.yarnpkg.com/universalify/-/universalify-0.2.0.tgz#6451760566fa857534745ab1dde952d1b1761be0"
integrity sha512-CJ1QgKmNg3CwvAv/kOFmtnEN05f0D/cn9QntgNOQlQF9dgvVTHj3t+8JPdjqawCHk7V/KA+fbUqzZ9XWhcqPUg==
update-browserslist-db@^1.0.11, update-browserslist-db@^1.0.13:
update-browserslist-db@^1.0.13:
version "1.0.13"
resolved "https://registry.yarnpkg.com/update-browserslist-db/-/update-browserslist-db-1.0.13.tgz#3c5e4f5c083661bd38ef64b6328c26ed6c8248c4"
integrity sha512-xebP81SNcPuNpPP3uzeW1NYXxI3rxyJzF3pD6sH4jE7o/IX+WtSpwnVU+qIsDPyk0d3hmFQ7mjqc6AtV604hbg==
@ -5247,10 +5235,10 @@ vite-tsconfig-paths@^3.5.0:
recrawl-sync "^2.0.3"
tsconfig-paths "^4.0.0"
vite@^5.0.0, vite@^5.1.4:
version "5.1.4"
resolved "https://registry.yarnpkg.com/vite/-/vite-5.1.4.tgz#14e9d3e7a6e488f36284ef13cebe149f060bcfb6"
integrity sha512-n+MPqzq+d9nMVTKyewqw6kSt+R3CkvF9QAKY8obiQn8g1fwTscKxyfaYnC632HtBXAQGc1Yjomphwn1dtwGAHg==
vite@^5.0.0, vite@^5.1.7:
version "5.1.7"
resolved "https://registry.yarnpkg.com/vite/-/vite-5.1.7.tgz#9f685a2c4c70707fef6d37341b0e809c366da619"
integrity sha512-sgnEEFTZYMui/sTlH1/XEnVNHMujOahPLGMxn1+5sIT45Xjng1Ec1K78jRP15dSmVgg5WBin9yO81j3o9OxofA==
dependencies:
esbuild "^0.19.3"
postcss "^8.4.35"

@ -29,6 +29,7 @@ import (
"github.com/google/uuid"
"tailscale.com/clientupdate/distsign"
"tailscale.com/hostinfo"
"tailscale.com/types/logger"
"tailscale.com/util/cmpver"
"tailscale.com/util/winutil"
@ -162,9 +163,10 @@ func NewUpdater(args Arguments) (*Updater, error) {
type updateFunction func() error
func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
canAutoUpdate = !hostinfo.New().Container.EqualBool(true) // EqualBool(false) would return false if the value is not set.
switch runtime.GOOS {
case "windows":
return up.updateWindows, true
return up.updateWindows, canAutoUpdate
case "linux":
switch distro.Get() {
case distro.NixOS:
@ -178,20 +180,20 @@ func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
// auto-update mechanism.
return up.updateSynology, false
case distro.Debian: // includes Ubuntu
return up.updateDebLike, true
return up.updateDebLike, canAutoUpdate
case distro.Arch:
if up.archPackageInstalled() {
// Arch update func just prints a message about how to update,
// it doesn't support auto-updates.
return up.updateArchLike, false
}
return up.updateLinuxBinary, true
return up.updateLinuxBinary, canAutoUpdate
case distro.Alpine:
return up.updateAlpineLike, true
return up.updateAlpineLike, canAutoUpdate
case distro.Unraid:
return up.updateUnraid, true
return up.updateUnraid, canAutoUpdate
case distro.QNAP:
return up.updateQNAP, true
return up.updateQNAP, canAutoUpdate
}
switch {
case haveExecutable("pacman"):
@ -200,21 +202,21 @@ func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
// it doesn't support auto-updates.
return up.updateArchLike, false
}
return up.updateLinuxBinary, true
return up.updateLinuxBinary, canAutoUpdate
case haveExecutable("apt-get"): // TODO(awly): add support for "apt"
// The distro.Debian switch case above should catch most apt-based
// systems, but add this fallback just in case.
return up.updateDebLike, true
return up.updateDebLike, canAutoUpdate
case haveExecutable("dnf"):
return up.updateFedoraLike("dnf"), true
return up.updateFedoraLike("dnf"), canAutoUpdate
case haveExecutable("yum"):
return up.updateFedoraLike("yum"), true
return up.updateFedoraLike("yum"), canAutoUpdate
case haveExecutable("apk"):
return up.updateAlpineLike, true
return up.updateAlpineLike, canAutoUpdate
}
// If nothing matched, fall back to tarball updates.
if up.Update == nil {
return up.updateLinuxBinary, true
return up.updateLinuxBinary, canAutoUpdate
}
case "darwin":
switch {
@ -230,7 +232,7 @@ func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
return nil, false
}
case "freebsd":
return up.updateFreeBSD, true
return up.updateFreeBSD, canAutoUpdate
}
return nil, false
}
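The container check above leans on a tri-state boolean: an unset value is neither true nor false, which is why the code asks EqualBool(true) instead of negating EqualBool(false). A small stand-in illustration using a plain *bool (the real field is an opt-style boolean; this is only a sketch of the logic):

	// canAutoUpdateSketch mirrors the intent of the hostinfo check:
	// auto-updates stay enabled unless we positively know we run in a container.
	// isContainer is a hypothetical tri-state: nil means "not set".
	func canAutoUpdateSketch(isContainer *bool) bool {
		inContainer := isContainer != nil && *isContainer // like EqualBool(true)
		return !inContainer
	}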
@ -436,7 +438,7 @@ func (up *Updater) updateDebLike() error {
return fmt.Errorf("apt-get update failed: %w; output:\n%s", err, out)
}
for i := 0; i < 2; i++ {
for range 2 {
out, err := exec.Command("apt-get", "install", "--yes", "--allow-downgrades", "tailscale="+ver).CombinedOutput()
if err != nil {
if !bytes.Contains(out, []byte(`dpkg was interrupted`)) {
@ -836,7 +838,7 @@ func (up *Updater) switchOutputToFile() (io.Closer, error) {
func (up *Updater) installMSI(msi string) error {
var err error
for tries := 0; tries < 2; tries++ {
cmd := exec.Command("msiexec.exe", "/i", filepath.Base(msi), "/quiet", "/promptrestart", "/qn")
cmd := exec.Command("msiexec.exe", "/i", filepath.Base(msi), "/quiet", "/norestart", "/qn")
cmd.Dir = filepath.Dir(msi)
cmd.Stdout = up.Stdout
cmd.Stderr = up.Stderr
@ -1017,6 +1019,20 @@ func (up *Updater) updateLinuxBinary() error {
return nil
}
func restartSystemdUnit(ctx context.Context) error {
if _, err := exec.LookPath("systemctl"); err != nil {
// Likely not a systemd-managed distro.
return errors.ErrUnsupported
}
if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
return fmt.Errorf("systemctl daemon-reload failed: %w\noutput: %s", err, out)
}
if out, err := exec.Command("systemctl", "restart", "tailscaled.service").CombinedOutput(); err != nil {
return fmt.Errorf("systemctl restart failed: %w\noutput: %s", err, out)
}
return nil
}
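A hedged sketch of how a caller such as updateLinuxBinary might use the exec-based helper above, treating errors.ErrUnsupported as "not a systemd distro" rather than a hard failure (the real call site is outside this hunk):

	// Sketch only: restart tailscaled after the binary has been swapped in place.
	if err := restartSystemdUnit(context.Background()); err != nil {
		if errors.Is(err, errors.ErrUnsupported) {
			log.Printf("not a systemd-managed system; restart tailscaled manually to finish the update")
		} else {
			log.Printf("restarting tailscaled: %v", err)
		}
	}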
func (up *Updater) downloadLinuxTarball(ver string) (string, error) {
dlDir, err := os.UserCacheDir()
if err != nil {
@ -1295,10 +1311,23 @@ func LatestTailscaleVersion(track string) (string, error) {
if err != nil {
return "", err
}
if latest.Version == "" {
return "", fmt.Errorf("no latest version found for %q track", track)
ver := latest.Version
switch runtime.GOOS {
case "windows":
ver = latest.MSIsVersion
case "darwin":
ver = latest.MacZipsVersion
case "linux":
ver = latest.TarballsVersion
if distro.Get() == distro.Synology {
ver = latest.SPKsVersion
}
}
if ver == "" {
return "", fmt.Errorf("no latest version found for OS %q on %q track", runtime.GOOS, track)
}
return latest.Version, nil
return ver, nil
}
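A brief usage sketch for the per-OS lookup above, assuming the exported signature remains LatestTailscaleVersion(track string) (string, error); the "stable" track name is illustrative:

	// Sketch: resolve the newest version published for this OS and track.
	ver, err := clientupdate.LatestTailscaleVersion("stable")
	if err != nil {
		log.Fatalf("looking up latest version: %v", err)
	}
	log.Printf("latest stable Tailscale for %s: %s", runtime.GOOS, ver)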
type trackPackages struct {

@ -663,7 +663,7 @@ func genTarball(t *testing.T, path string, files map[string]string) {
func TestWriteFileOverwrite(t *testing.T) {
path := filepath.Join(t.TempDir(), "test")
for i := 0; i < 2; i++ {
for i := range 2 {
content := fmt.Sprintf("content %d", i)
if err := writeFile(strings.NewReader(content), path, 0600); err != nil {
t.Fatal(err)

@ -445,7 +445,7 @@ type testServer struct {
func newTestServer(t *testing.T) *testServer {
var roots []rootKeyPair
for i := 0; i < 3; i++ {
for range 3 {
roots = append(roots, newRootKeyPair(t))
}

@ -1,37 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package clientupdate
import (
"context"
"errors"
"fmt"
"github.com/coreos/go-systemd/v22/dbus"
)
func restartSystemdUnit(ctx context.Context) error {
c, err := dbus.NewWithContext(ctx)
if err != nil {
// Likely not a systemd-managed distro.
return errors.ErrUnsupported
}
defer c.Close()
if err := c.ReloadContext(ctx); err != nil {
return fmt.Errorf("failed to reload tailsacled.service: %w", err)
}
ch := make(chan string, 1)
if _, err := c.RestartUnitContext(ctx, "tailscaled.service", "replace", ch); err != nil {
return fmt.Errorf("failed to restart tailsacled.service: %w", err)
}
select {
case res := <-ch:
if res != "done" {
return fmt.Errorf("systemd service restart failed with result %q", res)
}
case <-ctx.Done():
return ctx.Err()
}
return nil
}

@ -1,15 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !linux
package clientupdate
import (
"context"
"errors"
)
func restartSystemdUnit(ctx context.Context) error {
return errors.ErrUnsupported
}

@ -102,7 +102,7 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
writef("}")
writef("dst := new(%s)", name)
writef("*dst = *src")
for i := 0; i < t.NumFields(); i++ {
for i := range t.NumFields() {
fname := t.Field(i).Name()
ft := t.Field(i).Type()
if !codegen.ContainsPointers(ft) || codegen.HasNoClone(t.Tag(i)) {

@ -8,6 +8,7 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"net/http"
@ -18,20 +19,6 @@ import (
"tailscale.com/tailcfg"
)
// findKeyInKubeSecret inspects the kube secret secretName for a data
// field called "authkey", and returns its value if present.
func findKeyInKubeSecret(ctx context.Context, secretName string) (string, error) {
s, err := kc.GetSecret(ctx, secretName)
if err != nil {
return "", err
}
ak, ok := s.Data["authkey"]
if !ok {
return "", nil
}
return string(ak), nil
}
// storeDeviceInfo writes deviceID into the "device_id" data field of the kube
// secret secretName.
func storeDeviceInfo(ctx context.Context, secretName string, deviceID tailcfg.StableNodeID, fqdn string, addresses []netip.Prefix) error {
@ -88,9 +75,59 @@ func deleteAuthKey(ctx context.Context, secretName string) error {
return nil
}
var kc *kube.Client
var kc kube.Client
// setupKube is responsible for doing any necessary configuration and checks to
// ensure that tailscale state storage and authentication mechanism will work on
// Kubernetes.
func (cfg *settings) setupKube(ctx context.Context) error {
if cfg.KubeSecret == "" {
return nil
}
canPatch, canCreate, err := kc.CheckSecretPermissions(ctx, cfg.KubeSecret)
if err != nil {
return fmt.Errorf("Some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
}
cfg.KubernetesCanPatch = canPatch
s, err := kc.GetSecret(ctx, cfg.KubeSecret)
if err != nil && kube.IsNotFoundErr(err) && !canCreate {
return fmt.Errorf("Tailscale state Secret %s does not exist and we don't have permissions to create it. "+
"If you intend to store tailscale state elsewhere than a Kubernetes Secret, "+
"you can explicitly set TS_KUBE_SECRET env var to an empty string. "+
"Else ensure that RBAC is set up that allows the service account associated with this installation to create Secrets.", cfg.KubeSecret)
} else if err != nil && !kube.IsNotFoundErr(err) {
return fmt.Errorf("Getting Tailscale state Secret %s: %v", cfg.KubeSecret, err)
}
if cfg.AuthKey == "" && !isOneStepConfig(cfg) {
if s == nil {
log.Print("TS_AUTHKEY not provided and kube secret does not exist, login will be interactive if needed.")
return nil
}
keyBytes, _ := s.Data["authkey"]
key := string(keyBytes)
if key != "" {
// This behavior of pulling authkeys from kube secrets was added
// at the same time as the patch permission, so we can enforce
// that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
return errors.New("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the secret to manage the authkey.")
}
cfg.AuthKey = key
} else {
log.Print("No authkey found in kube secret and TS_AUTHKEY not provided, login will be interactive if needed.")
}
}
return nil
}
func initKube(root string) {
func initKubeClient(root string) {
if root != "/" {
// If we are running in a test, we need to set the root path to the fake
// service account directory.

@ -0,0 +1,206 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"context"
"errors"
"testing"
"github.com/google/go-cmp/cmp"
"tailscale.com/kube"
)
func TestSetupKube(t *testing.T) {
tests := []struct {
name string
cfg *settings
wantErr bool
wantCfg *settings
kc kube.Client
}{
{
name: "TS_AUTHKEY set, state Secret exists",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, nil
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
},
{
name: "TS_AUTHKEY set, state Secret does not exist, we have permissions to create it",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
},
{
name: "TS_AUTHKEY set, state Secret does not exist, we do not have permissions to create it",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY set, we encounter a non-404 error when trying to retrieve the state Secret",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 403}
},
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY set, we encounter a non-404 error when trying to check Secret permissions",
cfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, errors.New("broken")
},
},
wantErr: true,
},
{
// Interactive login using URL in Pod logs
name: "TS_AUTHKEY not set, state Secret does not exist, we have permissions to create it",
cfg: &settings{
KubeSecret: "foo",
},
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return nil, &kube.Status{Code: 404}
},
},
},
{
// Interactive login using URL in Pod logs
name: "TS_AUTHKEY not set, state Secret exists, but does not contain auth key",
cfg: &settings{
KubeSecret: "foo",
},
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{}, nil
},
},
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we do not have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
wantCfg: &settings{
KubeSecret: "foo",
},
wantErr: true,
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kube.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return true, false, nil
},
GetSecretImpl: func(context.Context, string) (*kube.Secret, error) {
return &kube.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
wantCfg: &settings{
KubeSecret: "foo",
AuthKey: "foo",
KubernetesCanPatch: true,
},
},
}
for _, tt := range tests {
kc = tt.kc
t.Run(tt.name, func(t *testing.T) {
if err := tt.cfg.setupKube(context.Background()); (err != nil) != tt.wantErr {
t.Errorf("settings.setupKube() error = %v, wantErr %v", err, tt.wantErr)
}
if diff := cmp.Diff(*tt.cfg, *tt.wantCfg); diff != "" {
t.Errorf("unexpected contents of settings after running settings.setupKube()\n(-got +want):\n%s", diff)
}
})
}
}

@ -18,7 +18,11 @@
// previously advertised routes. To accept routes, use TS_EXTRA_ARGS to pass
// in --accept-routes.
// - TS_DEST_IP: proxy all incoming Tailscale traffic to the given
// destination.
// destination defined by an IP address.
// - TS_EXPERIMENTAL_DEST_DNS_NAME: proxy all incoming Tailscale traffic to the given
// destination defined by a DNS name. The DNS name will be periodically resolved and firewall rules updated accordingly.
// This is currently intended to be used by the Kubernetes operator (ExternalName Services).
// This is an experimental env var and will likely change in the future.
// - TS_TAILNET_TARGET_IP: proxy all incoming non-Tailscale traffic to the given
// destination defined by an IP.
// - TS_TAILNET_TARGET_FQDN: proxy all incoming non-Tailscale traffic to the given
@ -48,8 +52,10 @@
// ${TS_CERT_DOMAIN}, it will be replaced with the value of the available FQDN.
// It cannot be used in conjunction with TS_DEST_IP. The file is watched for changes,
// and will be re-applied when it changes.
// - EXPERIMENTAL_TS_CONFIGFILE_PATH: if specified, a path to tailscaled
// config. If this is set, TS_HOSTNAME, TS_EXTRA_ARGS, TS_AUTHKEY,
// - TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR: if specified, a path to a
// directory that contains a tailscaled config file. The config file needs to be
// named cap-<current-tailscaled-cap>.hujson. If this is set, TS_HOSTNAME,
// TS_EXTRA_ARGS, TS_AUTHKEY,
// TS_ROUTES, TS_ACCEPT_DNS env vars must not be set. If this is set,
// containerboot only runs `tailscaled --config <path-to-this-configfile>`
// and not `tailscale up` or `tailscale set`.
@ -82,12 +88,16 @@ import (
"fmt"
"io/fs"
"log"
"math"
"net"
"net/netip"
"os"
"os/exec"
"os/signal"
"path"
"path/filepath"
"reflect"
"slices"
"strconv"
"strings"
"sync"
@ -100,6 +110,7 @@ import (
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
"tailscale.com/ipn/conffile"
kubeutils "tailscale.com/k8s-operator"
"tailscale.com/tailcfg"
"tailscale.com/types/logger"
"tailscale.com/types/ptr"
@ -122,7 +133,8 @@ func main() {
Hostname: defaultEnv("TS_HOSTNAME", ""),
Routes: defaultEnvStringPointer("TS_ROUTES"),
ServeConfigPath: defaultEnv("TS_SERVE_CONFIG", ""),
ProxyTo: defaultEnv("TS_DEST_IP", ""),
ProxyTargetIP: defaultEnv("TS_DEST_IP", ""),
ProxyTargetDNSName: defaultEnv("TS_EXPERIMENTAL_DEST_DNS_NAME", ""),
TailnetTargetIP: defaultEnv("TS_TAILNET_TARGET_IP", ""),
TailnetTargetFQDN: defaultEnv("TS_TAILNET_TARGET_FQDN", ""),
DaemonExtraArgs: defaultEnv("TS_TAILSCALED_EXTRA_ARGS", ""),
@ -137,7 +149,7 @@ func main() {
Socket: defaultEnv("TS_SOCKET", "/tmp/tailscaled.sock"),
AuthOnce: defaultBool("TS_AUTH_ONCE", false),
Root: defaultEnv("TS_TEST_ONLY_ROOT", "/"),
TailscaledConfigFilePath: defaultEnv("EXPERIMENTAL_TS_CONFIGFILE_PATH", ""),
TailscaledConfigFilePath: tailscaledConfigFilePath(),
AllowProxyingClusterTrafficViaIngress: defaultBool("EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS", false),
PodIP: defaultEnv("POD_IP", ""),
}
@ -150,8 +162,8 @@ func main() {
if err := ensureTunFile(cfg.Root); err != nil {
log.Fatalf("Unable to create tuntap device file: %v", err)
}
if cfg.ProxyTo != "" || cfg.Routes != nil || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" {
if err := ensureIPForwarding(cfg.Root, cfg.ProxyTo, cfg.TailnetTargetIP, cfg.TailnetTargetFQDN, cfg.Routes); err != nil {
if cfg.ProxyTargetIP != "" || cfg.ProxyTargetDNSName != "" || cfg.Routes != nil || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" {
if err := ensureIPForwarding(cfg.Root, cfg.ProxyTargetIP, cfg.TailnetTargetIP, cfg.TailnetTargetFQDN, cfg.Routes); err != nil {
log.Printf("Failed to enable IP forwarding: %v", err)
log.Printf("To run tailscale as a proxy or router container, IP forwarding must be enabled.")
if cfg.InKubernetes {
@ -163,44 +175,16 @@ func main() {
}
}
if cfg.InKubernetes {
initKube(cfg.Root)
}
// Context is used for all setup stuff until we're in steady
// state, so that if something is hanging we eventually time out
// and crashloop the container.
bootCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
if cfg.InKubernetes && cfg.KubeSecret != "" {
canPatch, err := kc.CheckSecretPermissions(bootCtx, cfg.KubeSecret)
if err != nil {
log.Fatalf("Some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
}
cfg.KubernetesCanPatch = canPatch
if cfg.AuthKey == "" && !isOneStepConfig(cfg) {
key, err := findKeyInKubeSecret(bootCtx, cfg.KubeSecret)
if err != nil {
log.Fatalf("Getting authkey from kube secret: %v", err)
}
if key != "" {
// This behavior of pulling authkeys from kube secrets was added
// at the same time as the patch permission, so we can enforce
// that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
log.Fatalf("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the secret to manage the authkey.")
}
log.Print("Using authkey found in kube secret")
cfg.AuthKey = key
} else {
log.Print("No authkey found in kube secret and TS_AUTHKEY not provided, login will be interactive if needed.")
}
if cfg.InKubernetes {
initKubeClient(cfg.Root)
if err := cfg.setupKube(bootCtx); err != nil {
log.Fatalf("error setting up for running on Kubernetes: %v", err)
}
}
@ -341,7 +325,7 @@ authLoop:
}
var (
wantProxy = cfg.ProxyTo != "" || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" || cfg.AllowProxyingClusterTrafficViaIngress
wantProxy = cfg.ProxyTargetIP != "" || cfg.ProxyTargetDNSName != "" || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" || cfg.AllowProxyingClusterTrafficViaIngress
wantDeviceInfo = cfg.InKubernetes && cfg.KubeSecret != "" && cfg.KubernetesCanPatch
startupTasksDone = false
currentIPs deephash.Sum // tailscale IPs assigned to device
@ -349,6 +333,9 @@ authLoop:
currentEgressIPs deephash.Sum
addrs []netip.Prefix
backendAddrs []net.IP
certDomain = new(atomic.Pointer[string])
certDomainChanged = make(chan bool, 1)
)
@ -362,6 +349,44 @@ authLoop:
log.Fatalf("error creating new netfilter runner: %v", err)
}
}
// Setup for proxies that are configured to proxy to a target specified
// by a DNS name (TS_EXPERIMENTAL_DEST_DNS_NAME).
const defaultCheckPeriod = time.Minute * 10 // how often to check what IPs the DNS name resolves to
var (
tc = make(chan string, 1)
failedResolveAttempts int
t *time.Timer = time.AfterFunc(defaultCheckPeriod, func() {
if cfg.ProxyTargetDNSName != "" {
tc <- "recheck"
}
})
)
defer t.Stop()
// resetTimer resets timer for when to next attempt to resolve the DNS
// name for the proxy configured with TS_EXPERIMENTAL_DEST_DNS_NAME. The
// timer gets reset to 10 minutes from now unless the last resolution
// attempt failed. If one or more consecutive previous resolution
// attempts failed, the next resolution attempt will happen after the smaller
// of (10 minutes, 2 ^ number-of-consecutive-failed-resolution-attempts
// seconds), i.e. 1s, 2s, 4s, ... capped at 10 minutes.
resetTimer := func(lastResolveFailed bool) {
if !lastResolveFailed {
log.Printf("reconfigureTimer: next DNS resolution attempt in %s", defaultCheckPeriod)
t.Reset(defaultCheckPeriod)
failedResolveAttempts = 0
return
}
minDelay := 2 // 2 seconds
nextTick := time.Second * time.Duration(math.Pow(float64(minDelay), float64(failedResolveAttempts)))
if nextTick > defaultCheckPeriod {
nextTick = defaultCheckPeriod // cap at 10 minutes
}
log.Printf("reconfigureTimer: last DNS resolution attempt failed, next DNS resolution attempt in %v", nextTick)
t.Reset(nextTick)
failedResolveAttempts++
}
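The schedule described in the comment above caps an exponential backoff at the 10-minute default. A standalone sketch that reproduces the same computation, so the progression and the cap are easy to verify:

	// Sketch: delays computed for consecutive DNS resolution failures,
	// mirroring resetTimer: 2^n seconds, capped at 10 minutes.
	package main

	import (
		"fmt"
		"math"
		"time"
	)

	func main() {
		const maxDelay = 10 * time.Minute
		for attempts := 0; attempts <= 10; attempts++ {
			d := time.Second * time.Duration(math.Pow(2, float64(attempts)))
			if d > maxDelay {
				d = maxDelay
			}
			fmt.Printf("failure #%d -> next attempt in %v\n", attempts+1, d)
		}
		// Prints 1s, 2s, 4s, ... 8m32s, then the 10m cap once 2^n seconds exceeds it.
	}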
notifyChan := make(chan ipn.Notify)
errChan := make(chan error)
go func() {
@ -399,7 +424,7 @@ runLoop:
log.Fatalf("tailscaled left running state (now in state %q), exiting", *n.State)
}
if n.NetMap != nil {
addrs := n.NetMap.SelfNode.Addresses().AsSlice()
addrs = n.NetMap.SelfNode.Addresses().AsSlice()
newCurrentIPs := deephash.Hash(&addrs)
ipsHaveChanged := newCurrentIPs != currentIPs
@ -425,7 +450,7 @@ runLoop:
egressAddrs = node.Addresses().AsSlice()
newCurentEgressIPs = deephash.Hash(&egressAddrs)
egressIPsHaveChanged = newCurentEgressIPs != currentEgressIPs
if egressIPsHaveChanged && len(egressAddrs) > 0 {
if egressIPsHaveChanged && len(egressAddrs) != 0 {
for _, egressAddr := range egressAddrs {
ea := egressAddr.Addr()
// TODO (irbekrm): make it work for IPv6 too.
@ -441,13 +466,32 @@ runLoop:
}
currentEgressIPs = newCurentEgressIPs
}
if cfg.ProxyTo != "" && len(addrs) > 0 && ipsHaveChanged {
if cfg.ProxyTargetIP != "" && len(addrs) != 0 && ipsHaveChanged {
log.Printf("Installing proxy rules")
if err := installIngressForwardingRule(ctx, cfg.ProxyTo, addrs, nfr); err != nil {
if err := installIngressForwardingRule(ctx, cfg.ProxyTargetIP, addrs, nfr); err != nil {
log.Fatalf("installing ingress proxy rules: %v", err)
}
}
if cfg.ServeConfigPath != "" && len(n.NetMap.DNS.CertDomains) > 0 {
if cfg.ProxyTargetDNSName != "" && len(addrs) != 0 && ipsHaveChanged {
newBackendAddrs, err := resolveDNS(ctx, cfg.ProxyTargetDNSName)
if err != nil {
log.Printf("[unexpected] error resolving DNS name %s: %v", cfg.ProxyTargetDNSName, err)
resetTimer(true)
continue
}
backendsHaveChanged := !(slices.EqualFunc(backendAddrs, newBackendAddrs, func(ip1 net.IP, ip2 net.IP) bool {
return slices.ContainsFunc(newBackendAddrs, func(ip net.IP) bool { return ip.Equal(ip1) })
}))
if backendsHaveChanged {
log.Printf("installing ingress proxy rules for backends %v", newBackendAddrs)
if err := installIngressForwardingRuleForDNSTarget(ctx, newBackendAddrs, addrs, nfr); err != nil {
log.Fatalf("error installing ingress proxy rules: %v", err)
}
}
resetTimer(false)
backendAddrs = newBackendAddrs
}
if cfg.ServeConfigPath != "" && len(n.NetMap.DNS.CertDomains) != 0 {
cd := n.NetMap.DNS.CertDomains[0]
prev := certDomain.Swap(ptr.To(cd))
if prev == nil || *prev != cd {
@ -457,7 +501,7 @@ runLoop:
}
}
}
if cfg.TailnetTargetIP != "" && ipsHaveChanged && len(addrs) > 0 {
if cfg.TailnetTargetIP != "" && ipsHaveChanged && len(addrs) != 0 {
log.Printf("Installing forwarding rules for destination %v", cfg.TailnetTargetIP)
if err := installEgressForwardingRule(ctx, cfg.TailnetTargetIP, addrs, nfr); err != nil {
log.Fatalf("installing egress proxy rules: %v", err)
@ -469,7 +513,7 @@ runLoop:
// enabled, set up proxy rule each time the
// tailnet IPs of this node change (including
// the first time they become available).
if cfg.AllowProxyingClusterTrafficViaIngress && cfg.ServeConfigPath != "" && ipsHaveChanged && len(addrs) > 0 {
if cfg.AllowProxyingClusterTrafficViaIngress && cfg.ServeConfigPath != "" && ipsHaveChanged && len(addrs) != 0 {
log.Printf("installing rules to forward traffic for %s to node's tailnet IP", cfg.PodIP)
if err := installTSForwardingRuleForDestination(ctx, cfg.PodIP, addrs, nfr); err != nil {
log.Fatalf("installing rules to forward traffic to node's tailnet IP: %v", err)
@ -491,32 +535,50 @@ runLoop:
log.Println("Startup complete, waiting for shutdown signal")
startupTasksDone = true
// Reap all processes, since we are PID1 and need to collect zombies. We can
// only start doing this once we've stopped shelling out to things
// `tailscale up`, otherwise this goroutine can reap the CLI subprocesses
// and wedge bringup.
// Wait on tailscaled process. It won't
// be cleaned up by default when the
// container exits as it is not PID1.
// TODO (irbekrm): perhaps we can
// replace the reaper by a running
// cmd.Wait in a goroutine immediately
// after starting tailscaled?
reaper := func() {
defer wg.Done()
for {
var status unix.WaitStatus
pid, err := unix.Wait4(-1, &status, 0, nil)
_, err := unix.Wait4(daemonProcess.Pid, &status, 0, nil)
if errors.Is(err, unix.EINTR) {
continue
}
if err != nil {
log.Fatalf("Waiting for exited processes: %v", err)
}
if pid == daemonProcess.Pid {
log.Printf("Tailscaled exited")
os.Exit(0)
log.Fatalf("Waiting for tailscaled to exit: %v", err)
}
log.Print("tailscaled exited")
os.Exit(0)
}
}
wg.Add(1)
go reaper()
}
}
case <-tc:
newBackendAddrs, err := resolveDNS(ctx, cfg.ProxyTargetDNSName)
if err != nil {
log.Printf("[unexpected] error resolving DNS name %s: %v", cfg.ProxyTargetDNSName, err)
resetTimer(true)
continue
}
backendsHaveChanged := !(slices.EqualFunc(backendAddrs, newBackendAddrs, func(ip1 net.IP, ip2 net.IP) bool {
return slices.ContainsFunc(newBackendAddrs, func(ip net.IP) bool { return ip.Equal(ip1) })
}))
if backendsHaveChanged && len(addrs) != 0 {
log.Printf("Backend address change detected, installing proxy rules for backends %v", newBackendAddrs)
if err := installIngressForwardingRuleForDNSTarget(ctx, newBackendAddrs, addrs, nfr); err != nil {
log.Fatalf("installing ingress proxy rules for DNS target %s: %v", cfg.ProxyTargetDNSName, err)
}
}
backendAddrs = newBackendAddrs
resetTimer(false)
}
}
wg.Wait()
@ -757,12 +819,12 @@ func ensureTunFile(root string) error {
}
// ensureIPForwarding enables IPv4/IPv6 forwarding for the container.
func ensureIPForwarding(root, clusterProxyTarget, tailnetTargetiP, tailnetTargetFQDN string, routes *string) error {
func ensureIPForwarding(root, clusterProxyTargetIP, tailnetTargetIP, tailnetTargetFQDN string, routes *string) error {
var (
v4Forwarding, v6Forwarding bool
)
if clusterProxyTarget != "" {
proxyIP, err := netip.ParseAddr(clusterProxyTarget)
if clusterProxyTargetIP != "" {
proxyIP, err := netip.ParseAddr(clusterProxyTargetIP)
if err != nil {
return fmt.Errorf("invalid cluster destination IP: %v", err)
}
@ -772,8 +834,8 @@ func ensureIPForwarding(root, clusterProxyTarget, tailnetTargetiP, tailnetTarget
v6Forwarding = true
}
}
if tailnetTargetiP != "" {
proxyIP, err := netip.ParseAddr(tailnetTargetiP)
if tailnetTargetIP != "" {
proxyIP, err := netip.ParseAddr(tailnetTargetIP)
if err != nil {
return fmt.Errorf("invalid tailnet destination IP: %v", err)
}
@ -801,7 +863,10 @@ func ensureIPForwarding(root, clusterProxyTarget, tailnetTargetiP, tailnetTarget
}
}
}
return enableIPForwarding(v4Forwarding, v6Forwarding, root)
}
func enableIPForwarding(v4Forwarding, v6Forwarding bool, root string) error {
var paths []string
if v4Forwarding {
paths = append(paths, filepath.Join(root, "proc/sys/net/ipv4/ip_forward"))
@ -896,16 +961,23 @@ func installIngressForwardingRule(ctx context.Context, dstStr string, tsIPs []ne
return err
}
var local netip.Addr
proxyHasIPv4Address := false
for _, pfx := range tsIPs {
if !pfx.IsSingleIP() {
continue
}
if pfx.Addr().Is4() {
proxyHasIPv4Address = true
}
if pfx.Addr().Is4() != dst.Is4() {
continue
}
local = pfx.Addr()
break
}
if proxyHasIPv4Address && dst.Is6() {
log.Printf("Warning: proxy backend ClusterIP is an IPv6 address and the proxy has a IPv4 tailnet address. You might need to disable IPv4 address allocation for the proxy for forwarding to work. See https://github.com/tailscale/tailscale/issues/12156")
}
if !local.IsValid() {
return fmt.Errorf("no tailscale IP matching family of %s found in %v", dstStr, tsIPs)
}
@ -918,15 +990,89 @@ func installIngressForwardingRule(ctx context.Context, dstStr string, tsIPs []ne
return nil
}
func installIngressForwardingRuleForDNSTarget(ctx context.Context, backendAddrs []net.IP, tsIPs []netip.Prefix, nfr linuxfw.NetfilterRunner) error {
var (
tsv4 netip.Addr
tsv6 netip.Addr
v4Backends []netip.Addr
v6Backends []netip.Addr
)
for _, pfx := range tsIPs {
if pfx.IsSingleIP() && pfx.Addr().Is4() {
tsv4 = pfx.Addr()
continue
}
if pfx.IsSingleIP() && pfx.Addr().Is6() {
tsv6 = pfx.Addr()
continue
}
}
// TODO: if more than one backend address is found and the firewall is in
// nftables mode, log that only the first IP will be used.
for _, ip := range backendAddrs {
if ip.To4() != nil {
v4Backends = append(v4Backends, netip.AddrFrom4([4]byte(ip.To4())))
}
if ip.To16() != nil {
v6Backends = append(v6Backends, netip.AddrFrom16([16]byte(ip.To16())))
}
}
// Enable IP forwarding here as opposed to at the start of containerboot
// as the IPv4/IPv6 requirements might have changed.
// For Kubernetes operator proxies, forwarding for both IPv4 and IPv6 is
// enabled by an init container, so in practice enabling forwarding here
// is only needed if this proxy has been configured by manually setting
// TS_EXPERIMENTAL_DEST_DNS_NAME env var for a containerboot instance.
if err := enableIPForwarding(len(v4Backends) != 0, len(v6Backends) != 0, ""); err != nil {
log.Printf("[unexpected] failed to ensure IP forwarding: %v", err)
}
updateFirewall := func(dst netip.Addr, backendTargets []netip.Addr) error {
if err := nfr.DNATWithLoadBalancer(dst, backendTargets); err != nil {
return fmt.Errorf("installing DNAT rules for ingress backends %+#v: %w", backendTargets, err)
}
// The backend might advertise MSS higher than that of the
// tailscale interfaces. Clamp MSS of packets going out via
// tailscale0 interface to its MTU to prevent broken connections
// in environments where path MTU discovery is not working.
if err := nfr.ClampMSSToPMTU("tailscale0", dst); err != nil {
return fmt.Errorf("adding rule to clamp traffic via tailscale0: %v", err)
}
return nil
}
if len(v4Backends) != 0 {
if !tsv4.IsValid() {
log.Printf("backend targets %v contain at least one IPv4 address, but this node's Tailscale IPs do not contain a valid IPv4 address: %v", backendAddrs, tsIPs)
} else if err := updateFirewall(tsv4, v4Backends); err != nil {
return fmt.Errorf("Installing IPv4 firewall rules: %w", err)
}
}
if len(v6Backends) != 0 {
if !tsv6.IsValid() {
log.Printf("backend targets %v contain at least one IPv6 address, but this node's Tailscale IPs do not contain a valid IPv6 address: %v", backendAddrs, tsIPs)
} else if !nfr.HasIPV6NAT() {
log.Printf("backend targets %v contain at least one IPv6 address, but the chosen firewall mode does not support IPv6 NAT", backendAddrs)
} else if err := updateFirewall(tsv6, v6Backends); err != nil {
return fmt.Errorf("Installing IPv6 firewall rules: %w", err)
}
}
return nil
}
// settings is all the configuration for containerboot.
type settings struct {
AuthKey string
Hostname string
Routes *string
// ProxyTo is the destination IP to which all incoming
// ProxyTargetIP is the destination IP to which all incoming
// Tailscale traffic should be proxied. If empty, no proxying
// is done. This is typically a locally reachable IP.
ProxyTo string
ProxyTargetIP string
// ProxyTargetDNSName is a DNS name to whose backing IP addresses all
// incoming Tailscale traffic should be proxied.
ProxyTargetDNSName string
// TailnetTargetIP is the destination IP to which all incoming
// non-Tailscale traffic should be proxied. This is typically a
// Tailscale IP.
@ -962,13 +1108,26 @@ type settings struct {
func (s *settings) validate() error {
if s.TailscaledConfigFilePath != "" {
dir, file := path.Split(s.TailscaledConfigFilePath)
if _, err := os.Stat(dir); err != nil {
return fmt.Errorf("error validating whether directory with tailscaled config file %s exists: %w", dir, err)
}
if _, err := os.Stat(s.TailscaledConfigFilePath); err != nil {
return fmt.Errorf("error validating whether tailscaled config directory %q contains tailscaled config for current capability version %q: %w. If this is a Tailscale Kubernetes operator proxy, please ensure that the version of the operator is not older than the version of the proxy", dir, file, err)
}
if _, err := conffile.Load(s.TailscaledConfigFilePath); err != nil {
return fmt.Errorf("error validating tailscaled configfile contents: %w", err)
}
}
if s.ProxyTo != "" && s.UserspaceMode {
if s.ProxyTargetIP != "" && s.UserspaceMode {
return errors.New("TS_DEST_IP is not supported with TS_USERSPACE")
}
if s.ProxyTargetDNSName != "" && s.UserspaceMode {
return errors.New("TS_EXPERIMENTAL_DEST_DNS_NAME is not supported with TS_USERSPACE")
}
if s.ProxyTargetDNSName != "" && s.ProxyTargetIP != "" {
return errors.New("TS_EXPERIMENTAL_DEST_DNS_NAME and TS_DEST_IP cannot both be set")
}
if s.TailnetTargetIP != "" && s.UserspaceMode {
return errors.New("TS_TAILNET_TARGET_IP is not supported with TS_USERSPACE")
}
@ -979,7 +1138,7 @@ func (s *settings) validate() error {
return errors.New("Both TS_TAILNET_TARGET_IP and TS_TAILNET_FQDN cannot be set")
}
if s.TailscaledConfigFilePath != "" && (s.AcceptDNS != nil || s.AuthKey != "" || s.Routes != nil || s.ExtraArgs != "" || s.Hostname != "") {
return errors.New("EXPERIMENTAL_TS_CONFIGFILE_PATH cannot be set in combination with TS_HOSTNAME, TS_EXTRA_ARGS, TS_AUTHKEY, TS_ROUTES, TS_ACCEPT_DNS.")
return errors.New("TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR cannot be set in combination with TS_HOSTNAME, TS_EXTRA_ARGS, TS_AUTHKEY, TS_ROUTES, TS_ACCEPT_DNS.")
}
if s.AllowProxyingClusterTrafficViaIngress && s.UserspaceMode {
return errors.New("EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS is not supported in userspace mode")
@ -993,6 +1152,28 @@ func (s *settings) validate() error {
return nil
}
func resolveDNS(ctx context.Context, name string) ([]net.IP, error) {
// TODO (irbekrm): look at using recursive.Resolver instead to resolve
// the DNS names as well as retrieve TTLs. However, it appears to return
// very short TTLs (shorter than those on the actual records).
ip4s, err := net.DefaultResolver.LookupIP(ctx, "ip4", name)
if err != nil {
if e, ok := err.(*net.DNSError); !(ok && e.IsNotFound) {
return nil, fmt.Errorf("error looking up IPv4 addresses: %v", err)
}
}
ip6s, err := net.DefaultResolver.LookupIP(ctx, "ip6", name)
if err != nil {
if e, ok := err.(*net.DNSError); !(ok && e.IsNotFound) {
return nil, fmt.Errorf("error looking up IPv6 addresses: %v", err)
}
}
if len(ip4s) == 0 && len(ip6s) == 0 {
return nil, fmt.Errorf("no IPv4 or IPv6 addresses found for host: %s", name)
}
return append(ip4s, ip6s...), nil
}
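// As an illustration of how the helpers above fit together, here is a
// hypothetical wrapper (not part of the original code; the name
// proxyToDNSTarget is made up for this sketch): resolve the backend DNS name
// once, then install DNAT and MSS-clamping rules for the resulting IPs.
func proxyToDNSTarget(ctx context.Context, dnsName string, tsIPs []netip.Prefix, nfr linuxfw.NetfilterRunner) error {
    backendIPs, err := resolveDNS(ctx, dnsName)
    if err != nil {
        return fmt.Errorf("resolving backend DNS name %q: %w", dnsName, err)
    }
    return installIngressForwardingRuleForDNSTarget(ctx, backendIPs, tsIPs, nfr)
}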
// defaultEnv returns the value of the given envvar name, or defVal if
// unset.
func defaultEnv(name, defVal string) string {
@ -1089,3 +1270,42 @@ func isTwoStepConfigAlwaysAuth(cfg *settings) bool {
func isOneStepConfig(cfg *settings) bool {
return cfg.TailscaledConfigFilePath != ""
}
// tailscaledConfigFilePath returns the path to the tailscaled config file that
// should be used for the current capability version. It is determined by the
// TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR environment variable and looks for a
// file named cap-<capability_version>.hujson in the directory. It searches for
// the highest capability version that is less than or equal to the current
// capability version.
func tailscaledConfigFilePath() string {
dir := os.Getenv("TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR")
if dir == "" {
return ""
}
fe, err := os.ReadDir(dir)
if err != nil {
log.Fatalf("error reading tailscaled config directory %q: %v", dir, err)
}
maxCompatVer := tailcfg.CapabilityVersion(-1)
for _, e := range fe {
// We don't check whether the entry is a regular file, as in most cases
// this will come from a mounted kube Secret, where the directory
// contents will be various symlinks.
if e.Type().IsDir() {
continue
}
cv, err := kubeutils.CapVerFromFileName(e.Name())
if err != nil {
log.Printf("skipping file %q in tailscaled config directory %q: %v", e.Name(), dir, err)
continue
}
if cv > maxCompatVer && cv <= tailcfg.CurrentCapabilityVersion {
maxCompatVer = cv
}
}
if maxCompatVer == -1 {
log.Fatalf("no tailscaled config file found in %q for current capability version %d", dir, tailcfg.CurrentCapabilityVersion)
}
filePath := path.Join(dir, kubeutils.TailscaledConfigFileNameForCap(maxCompatVer))
log.Printf("Using tailscaled config file %q for capability version %d", filePath, tailcfg.CurrentCapabilityVersion)
return filePath
}
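// pickCapConfigFile is a hypothetical, dependency-free sketch (not part of
// the original code) of the selection rule implemented above: among file
// names of the form cap-<N>.hujson, pick the highest N that does not exceed
// the current capability version. For example, given cap-87.hujson,
// cap-95.hujson and cap-110.hujson with a current capability version of 95,
// it returns "cap-95.hujson"; cap-110.hujson is skipped as too new.
func pickCapConfigFile(names []string, current int) (string, bool) {
    best := -1
    for _, name := range names {
        var n int
        if _, err := fmt.Sscanf(name, "cap-%d.hujson", &n); err != nil {
            continue // not a capability-versioned config file
        }
        if n > current || n <= best {
            continue // newer than supported, or not the highest seen so far
        }
        best = n
    }
    if best == -1 {
        return "", false
    }
    return fmt.Sprintf("cap-%d.hujson", best), true
}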

@ -65,7 +65,7 @@ func TestContainerBoot(t *testing.T) {
"dev/net",
"proc/sys/net/ipv4",
"proc/sys/net/ipv6/conf/all",
"etc",
"etc/tailscaled",
}
for _, path := range dirs {
if err := os.MkdirAll(filepath.Join(d, path), 0700); err != nil {
@ -80,7 +80,7 @@ func TestContainerBoot(t *testing.T) {
"dev/net/tun": []byte(""),
"proc/sys/net/ipv4/ip_forward": []byte("0"),
"proc/sys/net/ipv6/conf/all/forwarding": []byte("0"),
"etc/tailscaled": tailscaledConfBytes,
"etc/tailscaled/cap-95.hujson": tailscaledConfBytes,
}
resetFiles := func() {
for path, content := range files {
@ -638,14 +638,14 @@ func TestContainerBoot(t *testing.T) {
},
},
{
Name: "experimental tailscaled configfile",
Name: "experimental tailscaled config path",
Env: map[string]string{
"EXPERIMENTAL_TS_CONFIGFILE_PATH": filepath.Join(d, "etc/tailscaled"),
"TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR": filepath.Join(d, "etc/tailscaled/"),
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking --config=/etc/tailscaled",
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking --config=/etc/tailscaled/cap-95.hujson",
},
}, {
Notify: runningNotify,

@ -13,6 +13,7 @@ import (
"testing"
"tailscale.com/tstest"
"tailscale.com/tstest/nettest"
)
func BenchmarkHandleBootstrapDNS(b *testing.B) {
@ -55,6 +56,8 @@ func getBootstrapDNS(t *testing.T, q string) dnsEntryMap {
}
func TestUnpublishedDNS(t *testing.T) {
nettest.SkipIfNoNetwork(t)
const published = "login.tailscale.com"
const unpublished = "log.tailscale.io"

@ -17,10 +17,10 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
L github.com/google/nftables/expr from github.com/google/nftables+
L github.com/google/nftables/internal/parseexprfunc from github.com/google/nftables+
L github.com/google/nftables/xt from github.com/google/nftables/expr+
github.com/google/uuid from tailscale.com/tsweb
github.com/google/uuid from tailscale.com/util/fastuuid
github.com/hdevalence/ed25519consensus from tailscale.com/tka
L github.com/josharian/native from github.com/mdlayher/netlink+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/interfaces+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/netmon
L github.com/jsimonetti/rtnetlink/internal/unix from github.com/jsimonetti/rtnetlink
L 💣 github.com/mdlayher/netlink from github.com/google/nftables+
L 💣 github.com/mdlayher/netlink/nlenc from github.com/jsimonetti/rtnetlink+
@ -47,13 +47,14 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
github.com/x448/float16 from github.com/fxamacker/cbor/v2
💣 go4.org/mem from tailscale.com/client/tailscale+
go4.org/netipx from tailscale.com/net/tsaddr+
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/interfaces+
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/netmon+
google.golang.org/protobuf/encoding/protodelim from github.com/prometheus/common/expfmt
google.golang.org/protobuf/encoding/prototext from github.com/prometheus/common/expfmt+
google.golang.org/protobuf/encoding/protowire from google.golang.org/protobuf/encoding/protodelim+
google.golang.org/protobuf/internal/descfmt from google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/descopts from google.golang.org/protobuf/internal/filedesc+
google.golang.org/protobuf/internal/detrand from google.golang.org/protobuf/internal/descfmt+
google.golang.org/protobuf/internal/editiondefaults from google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/encoding/defval from google.golang.org/protobuf/internal/encoding/tag+
google.golang.org/protobuf/internal/encoding/messageset from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/internal/encoding/tag from google.golang.org/protobuf/internal/impl
@ -86,19 +87,19 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/derp from tailscale.com/cmd/derper+
tailscale.com/derp/derphttp from tailscale.com/cmd/derper
tailscale.com/disco from tailscale.com/derp
tailscale.com/drive from tailscale.com/client/tailscale+
tailscale.com/envknob from tailscale.com/client/tailscale+
tailscale.com/health from tailscale.com/net/tlsdial
tailscale.com/hostinfo from tailscale.com/net/interfaces+
tailscale.com/health from tailscale.com/net/tlsdial+
tailscale.com/hostinfo from tailscale.com/net/netmon+
tailscale.com/ipn from tailscale.com/client/tailscale
tailscale.com/ipn/ipnstate from tailscale.com/client/tailscale+
tailscale.com/metrics from tailscale.com/cmd/derper+
tailscale.com/net/dnscache from tailscale.com/derp/derphttp
tailscale.com/net/flowtrack from tailscale.com/net/packet+
💣 tailscale.com/net/interfaces from tailscale.com/net/netmon+
tailscale.com/net/ktimeout from tailscale.com/cmd/derper
tailscale.com/net/netaddr from tailscale.com/ipn+
tailscale.com/net/netknob from tailscale.com/net/netns
tailscale.com/net/netmon from tailscale.com/derp/derphttp+
💣 tailscale.com/net/netmon from tailscale.com/derp/derphttp+
tailscale.com/net/netns from tailscale.com/derp/derphttp
tailscale.com/net/netutil from tailscale.com/client/tailscale
tailscale.com/net/packet from tailscale.com/wgengine/filter
@ -114,9 +115,8 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
💣 tailscale.com/safesocket from tailscale.com/client/tailscale
tailscale.com/syncs from tailscale.com/cmd/derper+
tailscale.com/tailcfg from tailscale.com/client/tailscale+
tailscale.com/tailfs from tailscale.com/client/tailscale+
tailscale.com/tka from tailscale.com/client/tailscale+
W tailscale.com/tsconst from tailscale.com/net/interfaces
W tailscale.com/tsconst from tailscale.com/net/netmon
tailscale.com/tstime from tailscale.com/derp+
tailscale.com/tstime/mono from tailscale.com/tstime/rate
tailscale.com/tstime/rate from tailscale.com/derp+
@ -137,16 +137,18 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/types/structs from tailscale.com/ipn+
tailscale.com/types/tkatype from tailscale.com/client/tailscale+
tailscale.com/types/views from tailscale.com/ipn+
tailscale.com/util/cibuild from tailscale.com/health
tailscale.com/util/clientmetric from tailscale.com/net/netmon+
tailscale.com/util/cloudenv from tailscale.com/hostinfo+
W tailscale.com/util/cmpver from tailscale.com/net/tshttpproxy
tailscale.com/util/ctxkey from tailscale.com/tsweb+
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics
tailscale.com/util/dnsname from tailscale.com/hostinfo+
tailscale.com/util/fastuuid from tailscale.com/tsweb
tailscale.com/util/httpm from tailscale.com/client/tailscale
tailscale.com/util/lineread from tailscale.com/hostinfo+
L tailscale.com/util/linuxfw from tailscale.com/net/netns
tailscale.com/util/mak from tailscale.com/net/interfaces+
tailscale.com/util/mak from tailscale.com/health+
tailscale.com/util/multierr from tailscale.com/health+
tailscale.com/util/nocasemaps from tailscale.com/types/ipproto
tailscale.com/util/set from tailscale.com/derp+
@ -155,6 +157,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/util/syspolicy from tailscale.com/ipn
tailscale.com/util/vizerror from tailscale.com/tailcfg+
W 💣 tailscale.com/util/winutil from tailscale.com/hostinfo+
W 💣 tailscale.com/util/winutil/winenv from tailscale.com/hostinfo
tailscale.com/version from tailscale.com/derp+
tailscale.com/version/distro from tailscale.com/envknob+
tailscale.com/wgengine/filter from tailscale.com/types/netmap
@ -250,6 +253,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
math/big from crypto/dsa+
math/bits from compress/flate+
math/rand from github.com/mdlayher/netlink+
math/rand/v2 from tailscale.com/util/fastuuid
mime from github.com/prometheus/common/expfmt+
mime/multipart from net/http
mime/quotedprintable from mime/multipart

@ -36,19 +36,21 @@ import (
"tailscale.com/tsweb"
"tailscale.com/types/key"
"tailscale.com/types/logger"
"tailscale.com/version"
)
var (
dev = flag.Bool("dev", false, "run in localhost development mode (overrides -a)")
addr = flag.String("a", ":443", "server HTTP/HTTPS listen address, in form \":port\", \"ip:port\", or for IPv6 \"[ip]:port\". If the IP is omitted, it defaults to all interfaces. Serves HTTPS if the port is 443 and/or -certmode is manual, otherwise HTTP.")
httpPort = flag.Int("http-port", 80, "The port on which to serve HTTP. Set to -1 to disable. The listener is bound to the same IP (if any) as specified in the -a flag.")
stunPort = flag.Int("stun-port", 3478, "The UDP port on which to serve STUN. The listener is bound to the same IP (if any) as specified in the -a flag.")
configPath = flag.String("c", "", "config file path")
certMode = flag.String("certmode", "letsencrypt", "mode for getting a cert. possible options: manual, letsencrypt")
certDir = flag.String("certdir", tsweb.DefaultCertDir("derper-certs"), "directory to store LetsEncrypt certs, if addr's port is :443")
hostname = flag.String("hostname", "derp.tailscale.com", "LetsEncrypt host name, if addr's port is :443")
runSTUN = flag.Bool("stun", true, "whether to run a STUN server. It will bind to the same IP (if any) as the --addr flag value.")
runDERP = flag.Bool("derp", true, "whether to run a DERP server. The only reason to set this false is if you're decommissioning a server but want to keep its bootstrap DNS functionality still running.")
dev = flag.Bool("dev", false, "run in localhost development mode (overrides -a)")
versionFlag = flag.Bool("version", false, "print version and exit")
addr = flag.String("a", ":443", "server HTTP/HTTPS listen address, in form \":port\", \"ip:port\", or for IPv6 \"[ip]:port\". If the IP is omitted, it defaults to all interfaces. Serves HTTPS if the port is 443 and/or -certmode is manual, otherwise HTTP.")
httpPort = flag.Int("http-port", 80, "The port on which to serve HTTP. Set to -1 to disable. The listener is bound to the same IP (if any) as specified in the -a flag.")
stunPort = flag.Int("stun-port", 3478, "The UDP port on which to serve STUN. The listener is bound to the same IP (if any) as specified in the -a flag.")
configPath = flag.String("c", "", "config file path")
certMode = flag.String("certmode", "letsencrypt", "mode for getting a cert. possible options: manual, letsencrypt")
certDir = flag.String("certdir", tsweb.DefaultCertDir("derper-certs"), "directory to store LetsEncrypt certs, if addr's port is :443")
hostname = flag.String("hostname", "derp.tailscale.com", "LetsEncrypt host name, if addr's port is :443")
runSTUN = flag.Bool("stun", true, "whether to run a STUN server. It will bind to the same IP (if any) as the --addr flag value.")
runDERP = flag.Bool("derp", true, "whether to run a DERP server. The only reason to set this false is if you're decommissioning a server but want to keep its bootstrap DNS functionality still running.")
meshPSKFile = flag.String("mesh-psk-file", defaultMeshPSKFile(), "if non-empty, path to file containing the mesh pre-shared key file. It should contain some hex string; whitespace is trimmed.")
meshWith = flag.String("mesh-with", "", "optional comma-separated list of hostnames to mesh with; the server's own hostname can be in the list")
@ -129,6 +131,10 @@ func writeNewConfig() config {
func main() {
flag.Parse()
if *versionFlag {
fmt.Println(version.Long())
return
}
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
@ -185,7 +191,12 @@ func main() {
http.Error(w, "derp server disabled", http.StatusNotFound)
}))
}
mux.HandleFunc("/derp/probe", probeHandler)
// These two endpoints are the same. Different versions of the clients
// have assumed different paths over time, so we support both.
mux.HandleFunc("/derp/probe", derphttp.ProbeHandler)
mux.HandleFunc("/derp/latency-check", derphttp.ProbeHandler)
go refreshBootstrapDNSLoop()
mux.HandleFunc("/bootstrap-dns", tsweb.BrowserHeaderHandlerFunc(handleBootstrapDNS))
mux.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@ -364,17 +375,6 @@ func isChallengeChar(c rune) bool {
c == '.' || c == '-' || c == '_'
}
// probeHandler is the endpoint that js/wasm clients hit to measure
// DERP latency, since they can't do UDP STUN queries.
func probeHandler(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case "HEAD", "GET":
w.Header().Set("Access-Control-Allow-Origin", "*")
default:
http.Error(w, "bogus probe method", http.StatusMethodNotAllowed)
}
}
var validProdHostname = regexp.MustCompile(`^derp([^.]*)\.tailscale\.com\.?$`)
func prodAutocertHostPolicy(_ context.Context, host string) error {

@ -15,6 +15,7 @@ import (
"tailscale.com/derp"
"tailscale.com/derp/derphttp"
"tailscale.com/net/netmon"
"tailscale.com/types/key"
"tailscale.com/types/logger"
)
@ -36,7 +37,8 @@ func startMesh(s *derp.Server) error {
func startMeshWithHost(s *derp.Server, host string) error {
logf := logger.WithPrefix(log.Printf, fmt.Sprintf("mesh(%q): ", host))
c, err := derphttp.NewClient(s.PrivateKey(), "https://"+host+"/derp", logf)
netMon := netmon.NewStatic() // good enough for cmd/derper; no need for netns fanciness
c, err := derphttp.NewClient(s.PrivateKey(), "https://"+host+"/derp", logf, netMon)
if err != nil {
return err
}

@ -16,10 +16,12 @@ import (
"tailscale.com/prober"
"tailscale.com/tsweb"
"tailscale.com/version"
)
var (
derpMapURL = flag.String("derp-map", "https://login.tailscale.com/derpmap/default", "URL to DERP map (https:// or file://)")
versionFlag = flag.Bool("version", false, "print version and exit")
listen = flag.String("listen", ":8030", "HTTP listen address")
probeOnce = flag.Bool("once", false, "probe once and print results, then exit; ignores the listen flag")
spread = flag.Bool("spread", true, "whether to spread probing over time")
@ -33,6 +35,10 @@ var (
func main() {
flag.Parse()
if *versionFlag {
fmt.Println(version.Long())
return
}
p := prober.New().WithSpread(*spread).WithOnce(*probeOnce).WithMetricNamespace("derpprobe")
opts := []prober.DERPOpt{

13
cmd/dist/dist.go vendored

@ -13,11 +13,16 @@ import (
"tailscale.com/release/dist"
"tailscale.com/release/dist/cli"
"tailscale.com/release/dist/qnap"
"tailscale.com/release/dist/synology"
"tailscale.com/release/dist/unixpkgs"
)
var synologyPackageCenter bool
var (
synologyPackageCenter bool
qnapPrivateKeyPath string
qnapCertificatePath string
)
func getTargets() ([]dist.Target, error) {
var ret []dist.Target
@ -37,6 +42,10 @@ func getTargets() ([]dist.Target, error) {
// To build for package center, run
// ./tool/go run ./cmd/dist build --synology-package-center synology
ret = append(ret, synology.Targets(synologyPackageCenter, nil)...)
if (qnapPrivateKeyPath == "") != (qnapCertificatePath == "") {
return nil, errors.New("both --qnap-private-key-path and --qnap-certificate-path must be set")
}
ret = append(ret, qnap.Targets(qnapPrivateKeyPath, qnapCertificatePath)...)
return ret, nil
}
@ -45,6 +54,8 @@ func main() {
for _, subcmd := range cmd.Subcommands {
if subcmd.Name == "build" {
subcmd.FlagSet.BoolVar(&synologyPackageCenter, "synology-package-center", false, "build synology packages with extra metadata for the official package center")
subcmd.FlagSet.StringVar(&qnapPrivateKeyPath, "qnap-private-key-path", "", "sign qnap packages with given key (must also provide --qnap-certificate-path)")
subcmd.FlagSet.StringVar(&qnapCertificatePath, "qnap-certificate-path", "", "sign qnap packages with given certificate (must also provide --qnap-private-key-path)")
}
}

@ -0,0 +1,354 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// k8s-nameserver is a simple nameserver implementation meant to be used with
// k8s-operator to resolve MagicDNS names associated with tailnet proxies in
// the cluster.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"net"
"os"
"os/signal"
"path/filepath"
"sync"
"syscall"
"github.com/fsnotify/fsnotify"
"github.com/miekg/dns"
operatorutils "tailscale.com/k8s-operator"
"tailscale.com/util/dnsname"
)
const (
// tsNetDomain is the domain that this DNS nameserver has registered a handler for.
tsNetDomain = "ts.net"
// addr is the address that the UDP and TCP listeners will listen on.
addr = ":1053"
// The following constants are specific to the nameserver configuration
// provided by a mounted Kubernetes Configmap. The Configmap mounted at
// /config is the only supported way for configuring this nameserver.
defaultDNSConfigDir = "/config"
kubeletMountedConfigLn = "..data"
)
// nameserver is a simple nameserver that responds to DNS queries for A records
// for ts.net domain names over UDP or TCP. It serves DNS responses from
// in-memory IPv4 host records. It is intended to be deployed on Kubernetes with
// a ConfigMap mounted at /config that should contain the host records. It
// dynamically reconfigures its in-memory mappings as the contents of the
// mounted ConfigMap changes.
type nameserver struct {
// configReader returns the latest desired configuration (host records)
// for the nameserver. By default it gets set to a reader that reads
// from a Kubernetes ConfigMap mounted at /config, but this can be
// overridden in tests.
configReader configReaderFunc
// configWatcher is a watcher that returns an event when the desired
// configuration has changed and the nameserver should update the
// in-memory records.
configWatcher <-chan string
mu sync.Mutex // protects following
// ip4 are the in-memory hostname -> IP4 mappings that the nameserver
// uses to respond to A record queries.
ip4 map[dnsname.FQDN][]net.IP
}
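// exampleRecordsConfig is a hypothetical illustration (not part of the
// original file) of the records.json content that the mounted ConfigMap is
// expected to hold, in the v1alpha1 format parsed by resetRecords below; the
// host name and address are made up. With this configuration loaded, an A
// query for "foo.tailnet-xyz.ts.net." would be answered with 100.64.0.1 and
// a TTL of 0.
const exampleRecordsConfig = `{
  "version": "v1alpha1",
  "ip4": {
    "foo.tailnet-xyz.ts.net": ["100.64.0.1"]
  }
}`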
func main() {
ctx, cancel := context.WithCancel(context.Background())
// Ensure that we watch the kube Configmap mounted at /config for
// nameserver configuration updates and send events when updates happen.
c := ensureWatcherForKubeConfigMap(ctx)
ns := &nameserver{
configReader: configMapConfigReader,
configWatcher: c,
}
// Ensure that the in-memory records are populated now and get reset
// whenever the configuration changes.
ns.runRecordsReconciler(ctx)
// Register a DNS server handle for ts.net domain names. Not having a
// handle registered for any other domain names is how we enforce that
// this nameserver can only be used for ts.net domains - querying any
// other domain names returns Rcode Refused.
dns.HandleFunc(tsNetDomain, ns.handleFunc())
// Listen for DNS queries over UDP and TCP.
udpSig := make(chan os.Signal)
tcpSig := make(chan os.Signal)
go listenAndServe("udp", addr, udpSig)
go listenAndServe("tcp", addr, tcpSig)
sig := make(chan os.Signal, 1)
signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
s := <-sig
log.Printf("OS signal (%s) received, shutting down", s)
cancel() // exit the records reconciler and configmap watcher goroutines
udpSig <- s // stop the UDP listener
tcpSig <- s // stop the TCP listener
}
// handleFunc is a DNS query handler that can respond to A record queries from
// the nameserver's in-memory records.
// - If an A record query is received and the
// nameserver's in-memory records contain records for the queried domain name,
// return a success response.
// - If an A record query is received, but the
// nameserver's in-memory records do not contain records for the queried domain name,
// return NXDOMAIN.
// - If an A record query is received, but the queried domain name is not valid, return Format Error.
// - If a query is received for any other record type than A, return Not Implemented.
func (n *nameserver) handleFunc() func(w dns.ResponseWriter, r *dns.Msg) {
h := func(w dns.ResponseWriter, r *dns.Msg) {
m := new(dns.Msg)
defer func() {
w.WriteMsg(m)
}()
if len(r.Question) < 1 {
log.Print("[unexpected] nameserver received a request with no questions")
m = r.SetRcodeFormatError(r)
return
}
// TODO (irbekrm): maybe set message compression
switch r.Question[0].Qtype {
case dns.TypeA:
q := r.Question[0].Name
fqdn, err := dnsname.ToFQDN(q)
if err != nil {
m = r.SetRcodeFormatError(r)
return
}
// The only supported use of this nameserver is as a
// single source of truth for MagicDNS names by
// non-tailnet Kubernetes workloads.
m.Authoritative = true
m.RecursionAvailable = false
ips := n.lookupIP4(fqdn)
if len(ips) == 0 {
// As we are the authoritative nameserver for MagicDNS
// names, if we do not have a record for this MagicDNS
// name, it does not exist.
m = m.SetRcode(r, dns.RcodeNameError)
return
}
// TODO (irbekrm): TTL is currently set to 0, meaning
// that cluster workloads will not cache the DNS
// records. Revisit this in future when we understand
// the usage patterns better- is it putting too much
// load on kube DNS server or is this fine?
for _, ip := range ips {
rr := &dns.A{Hdr: dns.RR_Header{Name: q, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0}, A: ip}
m.SetRcode(r, dns.RcodeSuccess)
m.Answer = append(m.Answer, rr)
}
case dns.TypeAAAA:
// TODO (irbekrm): implement IPv6 support.
// Kubernetes distributions that I am most familiar with
// default to IPv4 for Pod CIDR ranges and in many cases don't
// support IPv6 at all, so this should not be crucial for now.
fallthrough
default:
log.Printf("[unexpected] nameserver received a query for an unsupported record type: %s", r.Question[0].String())
m.SetRcode(r, dns.RcodeNotImplemented)
}
}
return h
}
// runRecordsReconciler ensures that nameserver's in-memory records are
// reset when the provided configuration changes.
func (n *nameserver) runRecordsReconciler(ctx context.Context) {
log.Print("updating nameserver's records from the provided configuration...")
if err := n.resetRecords(); err != nil { // ensure records are up to date before the nameserver starts
log.Fatalf("error setting nameserver's records: %v", err)
}
log.Print("nameserver's records were updated")
go func() {
for {
select {
case <-ctx.Done():
log.Printf("context cancelled, exiting records reconciler")
return
case <-n.configWatcher:
log.Print("configuration update detected, resetting records")
if err := n.resetRecords(); err != nil {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("error resetting records: %v", err)
}
log.Print("nameserver records were reset")
}
}
}()
}
// resetRecords sets the in-memory DNS records of this nameserver from the
// provided configuration. It does not check for the diff, so the caller is
// expected to ensure that this is only called when reset is needed.
func (n *nameserver) resetRecords() error {
dnsCfgBytes, err := n.configReader()
if err != nil {
log.Printf("error reading nameserver's configuration: %v", err)
return err
}
if dnsCfgBytes == nil || len(dnsCfgBytes) < 1 {
log.Print("nameserver's configuration is empty, any in-memory records will be unset")
n.mu.Lock()
n.ip4 = make(map[dnsname.FQDN][]net.IP)
n.mu.Unlock()
return nil
}
dnsCfg := &operatorutils.Records{}
err = json.Unmarshal(dnsCfgBytes, dnsCfg)
if err != nil {
return fmt.Errorf("error unmarshalling nameserver configuration: %v\n", err)
}
if dnsCfg.Version != operatorutils.Alpha1Version {
return fmt.Errorf("unsupported configuration version %s, supported versions are %s\n", dnsCfg.Version, operatorutils.Alpha1Version)
}
ip4 := make(map[dnsname.FQDN][]net.IP)
defer func() {
n.mu.Lock()
defer n.mu.Unlock()
n.ip4 = ip4
}()
if len(dnsCfg.IP4) == 0 {
log.Print("nameserver's configuration contains no records, any in-memory records will be unset")
return nil
}
for fqdn, ips := range dnsCfg.IP4 {
fqdn, err := dnsname.ToFQDN(fqdn)
if err != nil {
log.Printf("invalid nameserver's configuration: %s is not a valid FQDN: %v; skipping this record", fqdn, err)
continue // one invalid hostname should not break the whole nameserver
}
for _, ipS := range ips {
ip := net.ParseIP(ipS).To4()
if ip == nil { // To4 returns nil if IP is not an IPv4 address
log.Printf("invalid nameserver's configuration: %v does not appear to be an IPv4 address; skipping this record", ipS)
continue // one invalid IP address should not break the whole nameserver
}
ip4[fqdn] = []net.IP{ip}
}
}
return nil
}
// listenAndServe starts a DNS server for the provided network and address.
func listenAndServe(net, addr string, shutdown chan os.Signal) {
s := &dns.Server{Addr: addr, Net: net}
go func() {
<-shutdown
log.Printf("shutting down server for %s", net)
s.Shutdown()
}()
log.Printf("listening for %s queries on %s", net, addr)
if err := s.ListenAndServe(); err != nil {
log.Fatalf("error running %s server: %v", net, err)
}
}
// ensureWatcherForKubeConfigMap sets up a new file watcher for the ConfigMap
// that's expected to be mounted at /config. Returns a channel that receives an
// event every time the contents get updated.
func ensureWatcherForKubeConfigMap(ctx context.Context) chan string {
c := make(chan string)
watcher, err := fsnotify.NewWatcher()
if err != nil {
log.Fatalf("error creating a new watcher for the mounted ConfigMap: %v", err)
}
// kubelet mounts the ConfigMap into a Pod using a series of symlinks, one
// of which is <mount-dir>/..data, which Kubernetes recommends consumers
// use if they need to monitor changes:
// https://github.com/kubernetes/kubernetes/blob/v1.28.1/pkg/volume/util/atomic_writer.go#L39-L61
toWatch := filepath.Join(defaultDNSConfigDir, kubeletMountedConfigLn)
go func() {
defer watcher.Close()
log.Printf("starting file watch for %s", defaultDNSConfigDir)
for {
select {
case <-ctx.Done():
log.Print("context cancelled, exiting ConfigMap watcher")
return
case event, ok := <-watcher.Events:
if !ok {
log.Fatal("watcher finished; exiting")
}
if event.Name == toWatch {
msg := fmt.Sprintf("ConfigMap update received: %s", event)
log.Print(msg)
c <- msg
}
case err, ok := <-watcher.Errors:
if err != nil {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("[unexpected] error watching configuration: %v", err)
}
if !ok {
// TODO (irbekrm): this runs in a
// container that will be thrown away,
// so this should be ok. But maybe still
// need to ensure that the DNS server
// terminates connections more
// gracefully.
log.Fatalf("[unexpected] errors watcher exited")
}
}
}
}()
if err = watcher.Add(defaultDNSConfigDir); err != nil {
log.Fatalf("failed setting up a watcher for the mounted ConfigMap: %v", err)
}
return c
}
// configReaderFunc is a function that returns the desired nameserver configuration.
type configReaderFunc func() ([]byte, error)
// configMapConfigReader reads the desired nameserver configuration from a
// records.json file in a ConfigMap mounted at /config.
var configMapConfigReader configReaderFunc = func() ([]byte, error) {
if contents, err := os.ReadFile(filepath.Join(defaultDNSConfigDir, operatorutils.DNSRecordsCMKey)); err == nil {
return contents, nil
} else if os.IsNotExist(err) {
return nil, nil
} else {
return nil, err
}
}
// lookupIP4 returns any IPv4 addresses for the given FQDN from nameserver's
// in-memory records.
func (n *nameserver) lookupIP4(fqdn dnsname.FQDN) []net.IP {
if n.ip4 == nil {
return nil
}
n.mu.Lock()
defer n.mu.Unlock()
f := n.ip4[fqdn]
return f
}

@ -0,0 +1,227 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"net"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/miekg/dns"
"tailscale.com/util/dnsname"
)
func TestNameserver(t *testing.T) {
tests := []struct {
name string
ip4 map[dnsname.FQDN][]net.IP
query *dns.Msg
wantResp *dns.Msg
}{
{
name: "A record query, record exists",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1, RecursionDesired: true},
},
wantResp: &dns.Msg{
Answer: []dns.RR{&dns.A{Hdr: dns.RR_Header{
Name: "foo.bar.com", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0},
A: net.IP{1, 2, 3, 4}}},
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeSuccess,
RecursionAvailable: false,
RecursionDesired: true,
Response: true,
Opcode: dns.OpcodeQuery,
Authoritative: true,
}},
},
{
name: "A record query, record does not exist",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "baz.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "baz.bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNameError,
RecursionAvailable: false,
Response: true,
Opcode: dns.OpcodeQuery,
Authoritative: true,
}},
},
{
name: "A record query, but the name is not a valid FQDN",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo..bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo..bar.com", Qtype: dns.TypeA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeFormatError,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
{
name: "AAAA record query",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeAAAA}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeAAAA}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNotImplemented,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
{
name: "CNAME record query",
ip4: map[dnsname.FQDN][]net.IP{dnsname.FQDN("foo.bar.com."): {{1, 2, 3, 4}}},
query: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeCNAME}},
MsgHdr: dns.MsgHdr{Id: 1},
},
wantResp: &dns.Msg{
Question: []dns.Question{{Name: "foo.bar.com", Qtype: dns.TypeCNAME}},
MsgHdr: dns.MsgHdr{
Id: 1,
Rcode: dns.RcodeNotImplemented,
Response: true,
Opcode: dns.OpcodeQuery,
}},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ns := &nameserver{
ip4: tt.ip4,
}
handler := ns.handleFunc()
fakeRespW := &fakeResponseWriter{}
handler(fakeRespW, tt.query)
if diff := cmp.Diff(*fakeRespW.msg, *tt.wantResp); diff != "" {
t.Fatalf("unexpected response (-got +want): \n%s", diff)
}
})
}
}
func TestResetRecords(t *testing.T) {
tests := []struct {
name string
config []byte
hasIp4 map[dnsname.FQDN][]net.IP
wantsIp4 map[dnsname.FQDN][]net.IP
wantsErr bool
}{
{
name: "previously empty nameserver.ip4 gets set",
config: []byte(`{"version": "v1alpha1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"foo.bar.com.": {{1, 2, 3, 4}}},
},
{
name: "nameserver.ip4 gets reset",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1alpha1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"foo.bar.com.": {{1, 2, 3, 4}}},
},
{
name: "configuration with incompatible version",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1beta1", "ip4": {"foo.bar.com": ["1.2.3.4"]}}`),
wantsIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
wantsErr: true,
},
{
name: "nameserver.ip4 gets reset to empty config when no configuration is provided",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
wantsIp4: make(map[dnsname.FQDN][]net.IP),
},
{
name: "nameserver.ip4 gets reset to empty config when the provided configuration is empty",
hasIp4: map[dnsname.FQDN][]net.IP{"baz.bar.com.": {{1, 1, 3, 3}}},
config: []byte(`{"version": "v1alpha1", "ip4": {}}`),
wantsIp4: make(map[dnsname.FQDN][]net.IP),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ns := &nameserver{
ip4: tt.hasIp4,
configReader: func() ([]byte, error) { return tt.config, nil },
}
if err := ns.resetRecords(); (err != nil) != tt.wantsErr {
t.Errorf("resetRecords() returned err: %v, wantsErr: %v", err, tt.wantsErr)
}
if diff := cmp.Diff(ns.ip4, tt.wantsIp4); diff != "" {
t.Fatalf("unexpected nameserver.ip4 contents (-got +want): \n%s", diff)
}
})
}
}
// fakeResponseWriter is a faked out dns.ResponseWriter that can be used in
// tests that need to read the response message that was written.
type fakeResponseWriter struct {
msg *dns.Msg
}
var _ dns.ResponseWriter = &fakeResponseWriter{}
func (fr *fakeResponseWriter) WriteMsg(msg *dns.Msg) error {
fr.msg = msg
return nil
}
func (fr *fakeResponseWriter) LocalAddr() net.Addr {
return nil
}
func (fr *fakeResponseWriter) RemoteAddr() net.Addr {
return nil
}
func (fr *fakeResponseWriter) Write([]byte) (int, error) {
return 0, nil
}
func (fr *fakeResponseWriter) Close() error {
return nil
}
func (fr *fakeResponseWriter) TsigStatus() error {
return nil
}
func (fr *fakeResponseWriter) TsigTimersOnly(bool) {}
func (fr *fakeResponseWriter) Hijack() {}

@ -73,38 +73,34 @@ func TestConnector(t *testing.T) {
hostname: "test-connector",
isExitNode: true,
subnetRoutes: "10.40.0.0/14",
confFileHash: "9321660203effb80983eaecc7b5ac5a8c53934926f46e895b9fe295dcfc5a904",
}
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Add another route to be advertised.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.SubnetRouter.AdvertiseRoutes = []tsapi.Route{"10.40.0.0/14", "10.44.0.0/20"}
})
opts.subnetRoutes = "10.40.0.0/14,10.44.0.0/20"
opts.confFileHash = "fb6c4daf67425f983985750cd8d6f2beae77e614fcb34176604571f5623d6862"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Remove a route.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.SubnetRouter.AdvertiseRoutes = []tsapi.Route{"10.44.0.0/20"}
})
opts.subnetRoutes = "10.44.0.0/20"
opts.confFileHash = "bacba177bcfe3849065cf6fee53d658a9bb4144197ac5b861727d69ea99742bb"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Remove the subnet router.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.SubnetRouter = nil
})
opts.subnetRoutes = ""
opts.confFileHash = "7c421a99128eb80e79a285a82702f19f8f720615542a15bd794858a6275d8079"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Re-add the subnet router.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
@ -113,9 +109,8 @@ func TestConnector(t *testing.T) {
}
})
opts.subnetRoutes = "10.44.0.0/20"
opts.confFileHash = "bacba177bcfe3849065cf6fee53d658a9bb4144197ac5b861727d69ea99742bb"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Delete the Connector.
if err = fc.Delete(context.Background(), cn); err != nil {
@ -156,19 +151,17 @@ func TestConnector(t *testing.T) {
parentType: "connector",
subnetRoutes: "10.40.0.0/14",
hostname: "test-connector",
confFileHash: "57d922331890c9b1c8c6ae664394cb254334c551d9cd9db14537b5d9da9fb17e",
}
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Add an exit node.
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.ExitNode = true
})
opts.isExitNode = true
opts.confFileHash = "1499b591fd97a50f0330db6ec09979792c49890cf31f5da5bb6a3f50dba1e77a"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Delete the Connector.
if err = fc.Delete(context.Background(), cn); err != nil {
@ -243,10 +236,9 @@ func TestConnectorWithProxyClass(t *testing.T) {
hostname: "test-connector",
isExitNode: true,
subnetRoutes: "10.40.0.0/14",
confFileHash: "9321660203effb80983eaecc7b5ac5a8c53934926f46e895b9fe295dcfc5a904",
}
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 2. Update Connector to specify a ProxyClass. ProxyClass is not yet
// ready, so its configuration is NOT applied to the Connector
@ -255,7 +247,7 @@ func TestConnectorWithProxyClass(t *testing.T) {
conn.Spec.ProxyClass = "custom-metadata"
})
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 3. ProxyClass is set to Ready by proxy-class reconciler. Connector
// get reconciled and configuration from the ProxyClass is applied to
@ -269,12 +261,8 @@ func TestConnectorWithProxyClass(t *testing.T) {
}}}
})
opts.proxyClass = pc.Name
// We lose the auth key on second reconcile, because in code it's set to
// StringData, but is actually read from Data. This works with a real
// API server, but not with our test setup here.
opts.confFileHash = "1499b591fd97a50f0330db6ec09979792c49890cf31f5da5bb6a3f50dba1e77a"
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 4. Connector.spec.proxyClass field is unset, Connector gets
// reconciled and configuration from the ProxyClass is removed from the
@ -284,5 +272,5 @@ func TestConnectorWithProxyClass(t *testing.T) {
})
opts.proxyClass = ""
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
}

@ -21,6 +21,9 @@ spec:
{{- end }}
labels:
app: operator
{{- with .Values.operatorConfig.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:

@ -24,6 +24,9 @@ rules:
- apiGroups: ["tailscale.com"]
resources: ["connectors", "connectors/status", "proxyclasses", "proxyclasses/status"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["tailscale.com"]
resources: ["dnsconfigs", "dnsconfigs/status"]
verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@ -45,11 +48,14 @@ metadata:
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["secrets"]
resources: ["secrets", "serviceaccounts", "configmaps"]
verbs: ["*"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
resources: ["statefulsets", "deployments"]
verbs: ["*"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

@ -12,7 +12,7 @@ oauth: {}
# of chart installation. We do not use Helm's CRD installation mechanism as that
# does not allow for upgrading CRDs.
# https://helm.sh/docs/chart_best_practices/custom_resource_definitions/
installCRDs: "true"
installCRDs: true
operatorConfig:
# ACL tag that operator will be tagged with. Operator must be made owner of
@ -37,6 +37,7 @@ operatorConfig:
resources: {}
podAnnotations: {}
podLabels: {}
tolerations: []

@ -31,6 +31,7 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: 'Connector defines a Tailscale node that will be deployed in the cluster. The node can be configured to act as a Tailscale subnet router and/or a Tailscale exit node. Connector is a cluster-scoped resource. More info: https://tailscale.com/kb/1236/kubernetes-operator#deploying-exit-nodes-and-subnet-routers-on-kubernetes-using-connector-custom-resource'
type: object
required:
- spec
@ -44,7 +45,7 @@ spec:
metadata:
type: object
spec:
description: ConnectorSpec describes the desired Tailscale component.
description: 'ConnectorSpec describes the desired Tailscale component. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status'
type: object
properties:
exitNode:

@ -0,0 +1,96 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
name: dnsconfigs.tailscale.com
spec:
group: tailscale.com
names:
kind: DNSConfig
listKind: DNSConfigList
plural: dnsconfigs
shortNames:
- dc
singular: dnsconfig
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Service IP address of the nameserver
jsonPath: .status.nameserver.ip
name: NameserverIP
type: string
name: v1alpha1
schema:
openAPIV3Schema:
type: object
required:
- spec
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
type: object
required:
- nameserver
properties:
nameserver:
type: object
properties:
image:
type: object
properties:
repo:
type: string
tag:
type: string
status:
type: object
properties:
conditions:
type: array
items:
description: ConnectorCondition contains condition information for a Connector.
type: object
required:
- status
- type
properties:
lastTransitionTime:
description: LastTransitionTime is the timestamp corresponding to the last status change of this condition.
type: string
format: date-time
message:
description: Message is a human readable description of the details of the last transition, complementing reason.
type: string
observedGeneration:
description: If set, this represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Connector.
type: integer
format: int64
reason:
description: Reason is a brief machine readable explanation for the condition's last transition.
type: string
status:
description: Status of the condition, one of ('True', 'False', 'Unknown').
type: string
type:
description: Type of the condition, known values are (`SubnetRouterReady`).
type: string
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
nameserver:
type: object
properties:
ip:
type: string
served: true
storage: true
subresources:
status: {}

@ -21,6 +21,7 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: 'ProxyClass describes a set of configuration parameters that can be applied to proxy resources created by the Tailscale Kubernetes operator. To apply a given ProxyClass to resources created for a tailscale Ingress or Service, use tailscale.com/proxy-class=<proxyclass-name> label. To apply a given ProxyClass to resources created for a Connector, use connector.spec.proxyClass field. ProxyClass is a cluster scoped resource. More info: https://tailscale.com/kb/1236/kubernetes-operator#cluster-resource-customization-using-proxyclass-custom-resource.'
type: object
required:
- spec
@ -34,12 +35,20 @@ spec:
metadata:
type: object
spec:
description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
type: object
required:
- statefulSet
properties:
metrics:
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation. Note that the metrics are currently considered unstable and will likely change in breaking ways in the future - we only recommend that you use those for debugging purposes.
type: object
required:
- enable
properties:
enable:
description: Setting enable to true will make the proxy serve Tailscale metrics at <pod-ip>:9001/debug/metrics. Defaults to false.
type: boolean
statefulSet:
description: Proxy's StatefulSet spec.
description: Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector).
type: object
properties:
annotations:
@ -56,6 +65,526 @@ spec:
description: Configuration for the proxy Pod.
type: object
properties:
affinity:
description: Proxy Pod's affinity rules. By default, the Tailscale Kubernetes operator does not apply any affinity rules. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#affinity
type: object
properties:
nodeAffinity:
description: Describes node affinity scheduling rules for the pod.
type: object
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
type: array
items:
description: An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
type: object
required:
- preference
- weight
properties:
preference:
description: A node selector term, associated with the corresponding weight.
type: object
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
type: array
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchFields:
description: A list of node selector requirements by node's fields.
type: array
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
type: array
items:
type: string
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
type: integer
format: int32
requiredDuringSchedulingIgnoredDuringExecution:
description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
type: object
required:
- nodeSelectorTerms
properties:
nodeSelectorTerms:
description: Required. A list of node selector terms. The terms are ORed.
type: array
items:
description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
type: object
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
type: array
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchFields:
description: A list of node selector requirements by node's fields.
type: array
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
type: array
items:
type: string
x-kubernetes-map-type: atomic
x-kubernetes-map-type: atomic
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
type: object
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
type: array
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
type: object
required:
- podAffinityTerm
- weight
properties:
podAffinityTerm:
description: Required. A pod affinity term, associated with the corresponding weight.
type: object
required:
- topologyKey
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
type: array
items:
type: string
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
weight:
description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
type: integer
format: int32
requiredDuringSchedulingIgnoredDuringExecution:
description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
type: array
items:
description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
type: object
required:
- topologyKey
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
type: array
items:
type: string
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
podAntiAffinity:
description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).
type: object
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
type: array
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
type: object
required:
- podAffinityTerm
- weight
properties:
podAffinityTerm:
description: Required. A pod affinity term, associated with the corresponding weight.
type: object
required:
- topologyKey
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
type: array
items:
type: string
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
weight:
description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
type: integer
format: int32
requiredDuringSchedulingIgnoredDuringExecution:
description: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
type: array
items:
description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
type: object
required:
- topologyKey
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
type: array
items:
type: string
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
type: array
items:
type: string
matchLabels:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
type: array
items:
type: string
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
annotations:
description: Annotations that will be added to the proxy Pod. Any annotations specified here will be merged with the default annotations applied to the Pod by the Tailscale Kubernetes operator. Annotations must be valid Kubernetes annotations. https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set
type: object
@ -177,6 +706,21 @@ spec:
description: Configuration for the proxy container running tailscale.
type: object
properties:
env:
description: List of environment variables to set in the container. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables Note that environment variables provided here will take precedence over Tailscale-specific environment variables set by the operator; however, running proxies with custom values for Tailscale environment variables (e.g. TS_USERSPACE) is not recommended and might break in the future.
type: array
items:
type: object
required:
- name
properties:
name:
description: Name of the environment variable. Must be a C_IDENTIFIER.
type: string
pattern: ^[-._a-zA-Z][-._a-zA-Z0-9]*$
value:
description: 'Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".'
type: string
resources:
description: Container resource requirements. By default, the Tailscale Kubernetes operator does not apply any resource requirements. The amount of resources required will depend on the amount of resources the operator needs to parse, usage patterns and cluster size. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources
type: object
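# Not part of this manifest: a minimal sketch of the env list shape described
# above, using hypothetical variable names. Per the value description, a
# $(VAR_NAME) reference is expanded from previously defined variables and a
# double $$ escapes the expansion so the literal string is kept.
#   env:
#     - name: MY_FLAG
#       value: "1"
#     - name: MY_DERIVED
#       value: "prefix-$(MY_FLAG)"      # expands to "prefix-1"
#     - name: MY_LITERAL
#       value: "$$(MY_FLAG)"            # stays as the literal "$(MY_FLAG)"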
@ -305,6 +849,21 @@ spec:
description: Configuration for the proxy init container that enables forwarding.
type: object
properties:
env:
description: List of environment variables to set in the container. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables Note that environment variables provided here will take precedence over Tailscale-specific environment variables set by the operator; however, running proxies with custom values for Tailscale environment variables (e.g. TS_USERSPACE) is not recommended and might break in the future.
type: array
items:
type: object
required:
- name
properties:
name:
description: Name of the environment variable. Must be a C_IDENTIFIER.
type: string
pattern: ^[-._a-zA-Z][-._a-zA-Z0-9]*$
value:
description: 'Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".'
type: string
resources:
description: Container resource requirements. By default, the Tailscale Kubernetes operator does not apply any resource requirements. The amount of resources required will depend on the amount of resources the operator needs to parse, usage patterns and cluster size. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources
type: object
@ -453,6 +1012,7 @@ spec:
description: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
type: string
status:
description: Status of the ProxyClass. This is set and managed automatically. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
type: object
properties:
conditions:

@ -0,0 +1,9 @@
apiVersion: tailscale.com/v1alpha1
kind: DNSConfig
metadata:
name: ts-dns
spec:
nameserver:
image:
repo: tailscale/k8s-nameserver
tag: unstable-v1.65
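# Illustrative only, not output from this change: per the DNSConfig CRD later
# in this diff (status.nameserver.ip, surfaced as the NameserverIP printer
# column), a reconciled DNSConfig is expected to gain a status block roughly
# like the sketch below. The IP value is a placeholder.
# status:
#   nameserver:
#     ip: 10.100.0.53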

@ -3,13 +3,15 @@ kind: ProxyClass
metadata:
name: prod
spec:
metrics:
enable: true
statefulSet:
annotations:
platform-component: infra
platform-component: infra
pod:
labels:
team: eng
nodeSelector:
beta.kubernetes.io/os: "linux"
kubernetes.io/os: "linux"
imagePullSecrets:
- name: "foo"

@ -0,0 +1,4 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: dnsrecords

@ -0,0 +1,37 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nameserver
spec:
replicas: 1
revisionHistoryLimit: 5
selector:
matchLabels:
app: nameserver
strategy:
type: Recreate
template:
metadata:
labels:
app: nameserver
spec:
containers:
- imagePullPolicy: IfNotPresent
name: nameserver
ports:
- name: tcp
protocol: TCP
containerPort: 1053
- name: udp
protocol: UDP
containerPort: 1053
volumeMounts:
- name: dnsrecords
mountPath: /config
restartPolicy: Always
serviceAccount: nameserver
serviceAccountName: nameserver
volumes:
- name: dnsrecords
configMap:
name: dnsrecords
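# Note (not part of the template above): the nameserver container deliberately
# carries no image field. Based on the DNSConfig example earlier in this diff,
# the operator presumably renders it from spec.nameserver.image, e.g.:
#         image: tailscale/k8s-nameserver:unstable-v1.65
# The dnsrecords ConfigMap mounted at /config is the one defined above; its
# record contents are managed outside this template.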

@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: nameserver

@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: nameserver
spec:
selector:
app: nameserver
ports:
- name: udp
targetPort: 1053
port: 53
protocol: UDP
- name: tcp
targetPort: 1053
port: 53
protocol: TCP
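# A hedged way to sanity-check the nameserver Service above (all names are
# placeholders; adjust the namespace and use an FQDN the nameserver actually
# serves): run a one-off pod that queries the Service on port 53.
# ---
# apiVersion: v1
# kind: Pod
# metadata:
#   name: dns-check
# spec:
#   restartPolicy: Never
#   containers:
#     - name: lookup
#       image: busybox:1.36
#       command: ["nslookup", "example.ts.net", "nameserver.tailscale.svc.cluster.local"]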

@ -60,6 +60,7 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: 'Connector defines a Tailscale node that will be deployed in the cluster. The node can be configured to act as a Tailscale subnet router and/or a Tailscale exit node. Connector is a cluster-scoped resource. More info: https://tailscale.com/kb/1236/kubernetes-operator#deploying-exit-nodes-and-subnet-routers-on-kubernetes-using-connector-custom-resource'
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@ -70,7 +71,7 @@ spec:
metadata:
type: object
spec:
description: ConnectorSpec describes the desired Tailscale component.
description: 'ConnectorSpec describes the desired Tailscale component. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status'
properties:
exitNode:
description: ExitNode defines whether the Connector node should act as a Tailscale exit node. Defaults to false. https://tailscale.com/kb/1103/exit-nodes
@ -158,6 +159,103 @@ spec:
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
name: dnsconfigs.tailscale.com
spec:
group: tailscale.com
names:
kind: DNSConfig
listKind: DNSConfigList
plural: dnsconfigs
shortNames:
- dc
singular: dnsconfig
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Service IP address of the nameserver
jsonPath: .status.nameserver.ip
name: NameserverIP
type: string
name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
nameserver:
properties:
image:
properties:
repo:
type: string
tag:
type: string
type: object
type: object
required:
- nameserver
type: object
status:
properties:
conditions:
items:
description: ConnectorCondition contains condition information for a Connector.
properties:
lastTransitionTime:
description: LastTransitionTime is the timestamp corresponding to the last status change of this condition.
format: date-time
type: string
message:
description: Message is a human readable description of the details of the last transition, complementing reason.
type: string
observedGeneration:
description: If set, this represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.condition[x].observedGeneration is 9, the condition is out of date with respect to the current state of the Connector.
format: int64
type: integer
reason:
description: Reason is a brief machine readable explanation for the condition's last transition.
type: string
status:
description: Status of the condition, one of ('True', 'False', 'Unknown').
type: string
type:
description: Type of the condition, known values are (`SubnetRouterReady`).
type: string
required:
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
nameserver:
properties:
ip:
type: string
type: object
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
@ -179,6 +277,7 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: 'ProxyClass describes a set of configuration parameters that can be applied to proxy resources created by the Tailscale Kubernetes operator. To apply a given ProxyClass to resources created for a tailscale Ingress or Service, use tailscale.com/proxy-class=<proxyclass-name> label. To apply a given ProxyClass to resources created for a Connector, use connector.spec.proxyClass field. ProxyClass is a cluster scoped resource. More info: https://tailscale.com/kb/1236/kubernetes-operator#cluster-resource-customization-using-proxyclass-custom-resource.'
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@ -189,9 +288,19 @@ spec:
metadata:
type: object
spec:
description: Specification of the desired state of the ProxyClass resource. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
properties:
metrics:
description: Configuration for proxy metrics. Metrics are currently not supported for egress proxies and for Ingress proxies that have been configured with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation. Note that the metrics are currently considered unstable and will likely change in breaking ways in the future - we only recommend that you use those for debugging purposes.
properties:
enable:
description: Setting enable to true will make the proxy serve Tailscale metrics at <pod-ip>:9001/debug/metrics. Defaults to false.
type: boolean
required:
- enable
type: object
statefulSet:
description: Proxy's StatefulSet spec.
description: Configuration parameters for the proxy's StatefulSet. Tailscale Kubernetes operator deploys a StatefulSet for each of the user configured proxies (Tailscale Ingress, Tailscale Service, Connector).
properties:
annotations:
additionalProperties:
@ -206,6 +315,526 @@ spec:
pod:
description: Configuration for the proxy Pod.
properties:
affinity:
description: Proxy Pod's affinity rules. By default, the Tailscale Kubernetes operator does not apply any affinity rules. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#affinity
properties:
nodeAffinity:
description: Describes node affinity scheduling rules for the pod.
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
items:
description: An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
properties:
preference:
description: A node selector term, associated with the corresponding weight.
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchFields:
description: A list of node selector requirements by node's fields.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
type: integer
required:
- preference
- weight
type: object
type: array
requiredDuringSchedulingIgnoredDuringExecution:
description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
properties:
nodeSelectorTerms:
description: Required. A list of node selector terms. The terms are ORed.
items:
description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchFields:
description: A list of node selector requirements by node's fields.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
properties:
podAffinityTerm:
description: Required. A pod affinity term, associated with the corresponding weight.
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
type: string
type: array
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
required:
- topologyKey
type: object
weight:
description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
format: int32
type: integer
required:
- podAffinityTerm
- weight
type: object
type: array
requiredDuringSchedulingIgnoredDuringExecution:
description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
items:
description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
type: string
type: array
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
required:
- topologyKey
type: object
type: array
type: object
podAntiAffinity:
description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).
properties:
preferredDuringSchedulingIgnoredDuringExecution:
description: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
properties:
podAffinityTerm:
description: Required. A pod affinity term, associated with the corresponding weight.
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
type: string
type: array
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
required:
- topologyKey
type: object
weight:
description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
format: int32
type: integer
required:
- podAffinityTerm
- weight
type: object
type: array
requiredDuringSchedulingIgnoredDuringExecution:
description: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
items:
description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
properties:
labelSelector:
description: A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
mismatchLabelKeys:
description: MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `LabelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
items:
type: string
type: array
x-kubernetes-list-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
type: string
type: array
topologyKey:
description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
type: string
required:
- topologyKey
type: object
type: array
type: object
type: object
annotations:
additionalProperties:
type: string
@ -326,6 +955,21 @@ spec:
tailscaleContainer:
description: Configuration for the proxy container running tailscale.
properties:
env:
description: List of environment variables to set in the container. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables Note that environment variables provided here will take precedence over Tailscale-specific environment variables set by the operator; however, running proxies with custom values for Tailscale environment variables (such as TS_USERSPACE) is not recommended and might break in the future.
items:
properties:
name:
description: Name of the environment variable. Must be a C_IDENTIFIER.
pattern: ^[-._a-zA-Z][-._a-zA-Z0-9]*$
type: string
value:
description: 'Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".'
type: string
required:
- name
type: object
type: array
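# Illustrative example only (not part of the schema). It assumes this env block
# sits under a ProxyClass's spec.statefulSet.pod.tailscaleContainer and uses a
# hypothetical variable name:
#
#   spec:
#     statefulSet:
#       pod:
#         tailscaleContainer:
#           env:
#             - name: TS_DEBUG_EXAMPLE
#               value: "true"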
resources:
description: Container resource requirements. By default, the Tailscale Kubernetes operator does not apply any resource requirements. The amount of resources required will depend on the amount of resources the operator needs to parse, usage patterns, and cluster size. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources
properties:
@ -454,6 +1098,21 @@ spec:
tailscaleInitContainer:
description: Configuration for the proxy init container that enables forwarding.
properties:
env:
description: List of environment variables to set in the container. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables Note that environment variables provided here will take precedence over Tailscale-specific environment variables set by the operator; however, running proxies with custom values for Tailscale environment variables (such as TS_USERSPACE) is not recommended and might break in the future.
items:
properties:
name:
description: Name of the environment variable. Must be a C_IDENTIFIER.
pattern: ^[-._a-zA-Z][-._a-zA-Z0-9]*$
type: string
value:
description: 'Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".'
type: string
required:
- name
type: object
type: array
resources:
description: Container resource requirements. By default, the Tailscale Kubernetes operator does not apply any resource requirements. The amount of resources required will depend on the amount of resources the operator needs to parse, usage patterns, and cluster size. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources
properties:
@ -604,10 +1263,9 @@ spec:
type: array
type: object
type: object
required:
- statefulSet
type: object
status:
description: Status of the ProxyClass. This is set and managed automatically. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
properties:
conditions:
description: List of status conditions to indicate the status of the ProxyClass. Known condition types are `ProxyClassReady`.
@ -691,6 +1349,16 @@ rules:
- list
- watch
- update
- apiGroups:
- tailscale.com
resources:
- dnsconfigs
- dnsconfigs/status
verbs:
- get
- list
- watch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@ -715,14 +1383,25 @@ rules:
- ""
resources:
- secrets
- serviceaccounts
- configmaps
verbs:
- '*'
- apiGroups:
- apps
resources:
- statefulsets
- deployments
verbs:
- '*'
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role

@ -14,10 +14,8 @@ spec:
- name: sysctler
securityContext:
privileged: true
command: ["/bin/sh"]
args:
- -c
- sysctl -w net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1
command: ["/bin/sh", "-c"]
args: [sysctl -w net.ipv4.ip_forward=1 && if sysctl net.ipv6.conf.all.forwarding; then sysctl -w net.ipv6.conf.all.forwarding=1; fi]
resources:
requests:
cpu: 1m

@ -20,3 +20,7 @@ spec:
env:
- name: TS_USERSPACE
value: "true"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP

@ -0,0 +1,337 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet and to make Tailscale nodes available to cluster
// workloads
package main
import (
"context"
"encoding/json"
"fmt"
"slices"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/net"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/util/mak"
)
const (
dnsRecordsRecocilerFinalizer = "tailscale.com/dns-records-reconciler"
annotationTSMagicDNSName = "tailscale.com/magic-dnsname"
)
// dnsRecordsReconciler knows how to update dnsrecords ConfigMap with DNS
// records.
// The records that it creates are:
// - For tailscale Ingress, a mapping of the Ingress's MagicDNSName to the IP address of
// the ingress proxy Pod.
// - For egress proxies configured via tailscale.com/tailnet-fqdn annotation, a
// mapping of the tailnet FQDN to the IP address of the egress proxy Pod.
//
// Records will only be created if there is exactly one ready
// tailscale.com/v1alpha1.DNSConfig instance in the cluster (so that we know
// that there is a ts.net nameserver deployed in the cluster).
type dnsRecordsReconciler struct {
client.Client
tsNamespace string // namespace in which we provision tailscale resources
logger *zap.SugaredLogger
isDefaultLoadBalancer bool // true if operator is the default ingress controller in this cluster
}
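// For illustration only (a sketch; the exact JSON field names depend on the
// Records struct tags in tailscale.com/k8s-operator, and the hostname/IP below
// are hypothetical): after a successful reconcile, the dnsrecords ConfigMap in
// the operator's namespace holds an entry, keyed by DNSRecordsCMKey, with
// content along the lines of:
//
//	{"version":"v1alpha1","ip4":{"myingress.tailnetxyz.ts.net":["10.0.0.12"]}}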
// Reconcile takes a reconcile.Request for a headless Service fronting a
// tailscale proxy and updates DNS Records in dnsrecords ConfigMap for the
// in-cluster ts.net nameserver if required.
func (dnsRR *dnsRecordsReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
logger := dnsRR.logger.With("Service", req.NamespacedName)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
headlessSvc := new(corev1.Service)
err = dnsRR.Client.Get(ctx, req.NamespacedName, headlessSvc)
if apierrors.IsNotFound(err) {
logger.Debugf("Service not found")
return reconcile.Result{}, nil
}
if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get Service: %w", err)
}
if !(isManagedByType(headlessSvc, "svc") || isManagedByType(headlessSvc, "ingress")) {
logger.Debugf("Service is not a headless Service for a tailscale ingress or egress proxy; do nothing")
return reconcile.Result{}, nil
}
if !headlessSvc.DeletionTimestamp.IsZero() {
logger.Debug("Service is being deleted, clean up resources")
return reconcile.Result{}, dnsRR.maybeCleanup(ctx, headlessSvc, logger)
}
// Check that there is a ts.net nameserver deployed to the cluster by
// checking that there is tailscale.com/v1alpha1.DNSConfig resource in a
// Ready state.
dnsCfgLst := new(tsapi.DNSConfigList)
if err = dnsRR.List(ctx, dnsCfgLst); err != nil {
return reconcile.Result{}, fmt.Errorf("error listing DNSConfigs: %w", err)
}
if len(dnsCfgLst.Items) == 0 {
logger.Debugf("DNSConfig does not exist, not creating DNS records")
return reconcile.Result{}, nil
}
if len(dnsCfgLst.Items) > 1 {
logger.Errorf("Invalid cluster state - more than one DNSConfig found in cluster. Please ensure no more than one exists")
return reconcile.Result{}, nil
}
dnsCfg := dnsCfgLst.Items[0]
if !operatorutils.DNSCfgIsReady(&dnsCfg) {
logger.Info("DNSConfig is not ready yet, waiting...")
return reconcile.Result{}, nil
}
return reconcile.Result{}, dnsRR.maybeProvision(ctx, headlessSvc, logger)
}
// maybeProvision ensures that dnsrecords ConfigMap contains a record for the
// proxy associated with the headless Service.
// The record is only provisioned if the proxy is for a tailscale Ingress or
// egress configured via tailscale.com/tailnet-fqdn annotation.
//
// For Ingress, the record is a mapping between the MagicDNSName of the Ingress, retrieved from
// ingress.status.loadBalancer.ingress.hostname field and the proxy Pod IP addresses
// retrieved from the EndpointSlice associated with this headless Service, i.e.
// Records{IP4: {<MagicDNS name of the Ingress>: <[IPs of the ingress proxy Pods]>}}
//
// For egress, the record is a mapping between tailscale.com/tailnet-fqdn
// annotation and the proxy Pod IP addresses, retrieved from the EndpointSlice
// associated with this headless Service, i.e.
// Records{IP4: {<tailscale.com/tailnet-fqdn>: <[IPs of the egress proxy Pods]>}}
//
// If records need to be created for this proxy, maybeProvision will also:
// - update the headless Service with a tailscale.com/magic-dnsname annotation
// - update the headless Service with a finalizer
func (dnsRR *dnsRecordsReconciler) maybeProvision(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) error {
if headlessSvc == nil {
logger.Info("[unexpected] maybeProvision called with a nil Service")
return nil
}
isEgressFQDNSvc, err := dnsRR.isSvcForFQDNEgressProxy(ctx, headlessSvc)
if err != nil {
return fmt.Errorf("error checking whether the Service is for an egress proxy: %w", err)
}
if !(isEgressFQDNSvc || isManagedByType(headlessSvc, "ingress")) {
logger.Debug("Service is not fronting a proxy that we create DNS records for; do nothing")
return nil
}
fqdn, err := dnsRR.fqdnForDNSRecord(ctx, headlessSvc, logger)
if err != nil {
return fmt.Errorf("error determining DNS name for record: %w", err)
}
if fqdn == "" {
logger.Debugf("MagicDNS name does not (yet) exist, not provisioning DNS record")
return nil // a new reconcile will be triggered once it's added
}
oldHeadlessSvc := headlessSvc.DeepCopy()
// Ensure that headless Service is annotated with a finalizer to help
// with records cleanup when proxy resources are deleted.
if !slices.Contains(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer) {
headlessSvc.Finalizers = append(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
}
// Ensure that headless Service is annotated with the current MagicDNS
// name to help with records cleanup when proxy resources are deleted or
// MagicDNS name changes.
oldFqdn := headlessSvc.Annotations[annotationTSMagicDNSName]
if oldFqdn != "" && oldFqdn != fqdn { // i.e user has changed the value of tailscale.com/tailnet-fqdn annotation
logger.Debugf("MagicDNS name has changed, remvoving record for %s", oldFqdn)
updateFunc := func(rec *operatorutils.Records) {
delete(rec.IP4, oldFqdn)
}
if err = dnsRR.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error removing record for %s: %w", oldFqdn, err)
}
}
mak.Set(&headlessSvc.Annotations, annotationTSMagicDNSName, fqdn)
if !apiequality.Semantic.DeepEqual(oldHeadlessSvc, headlessSvc) {
logger.Infof("provisioning DNS record for MagicDNS name: %s", fqdn) // this will be printed exactly once
if err := dnsRR.Update(ctx, headlessSvc); err != nil {
return fmt.Errorf("error updating proxy headless Service metadata: %w", err)
}
}
// Get the Pod IP addresses for the proxy from the EndpointSlice for the
// headless Service.
labels := map[string]string{discoveryv1.LabelServiceName: headlessSvc.Name} // https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#ownership
eps, err := getSingleObject[discoveryv1.EndpointSlice](ctx, dnsRR.Client, dnsRR.tsNamespace, labels)
if err != nil {
return fmt.Errorf("error getting the EndpointSlice for the proxy's headless Service: %w", err)
}
if eps == nil {
logger.Debugf("proxy's headless Service EndpointSlice does not yet exist. We will reconcile again once it's created")
return nil
}
// An EndpointSlice for a Service can have a list of endpoints that each
// can have multiple addresses - these are the IP addresses of any Pods
// selected by that Service. Pick all the IPv4 addresses.
ips := make([]string, 0)
for _, ep := range eps.Endpoints {
for _, ip := range ep.Addresses {
if !net.IsIPv4String(ip) {
logger.Infof("EndpointSlice contains IP address %q that is not IPv4, ignoring. Currently only IPv4 is supported", ip)
} else {
ips = append(ips, ip)
}
}
}
if len(ips) == 0 {
logger.Debugf("EndpointSlice for the Service contains no IPv4 addresses. We will reconcile again once they are created.")
return nil
}
updateFunc := func(rec *operatorutils.Records) {
mak.Set(&rec.IP4, fqdn, ips)
}
if err = dnsRR.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error updating DNS records: %w", err)
}
return nil
}
// maybeCleanup ensures that the DNS record for the proxy has been removed from
// dnsrecords ConfigMap and the tailscale.com/dns-records-reconciler finalizer
// has been removed from the Service. If the record is not found in the
// ConfigMap, the ConfigMap does not exist, or the Service does not have
// tailscale.com/magic-dnsname annotation, just remove the finalizer.
func (h *dnsRecordsReconciler) maybeCleanup(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) error {
ix := slices.Index(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
if ix == -1 {
logger.Debugf("no finalizer, nothing to do")
return nil
}
cm := &corev1.ConfigMap{}
err := h.Client.Get(ctx, types.NamespacedName{Name: operatorutils.DNSRecordsCMName, Namespace: h.tsNamespace}, cm)
if apierrors.IsNotFound(err) {
logger.Debug("'dsnrecords' ConfigMap not found")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
if err != nil {
return fmt.Errorf("error retrieving 'dnsrecords' ConfigMap: %w", err)
}
if cm.Data == nil {
logger.Debug("'dnsrecords' ConfigMap contains no records")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
_, ok := cm.Data[operatorutils.DNSRecordsCMKey]
if !ok {
logger.Debug("'dnsrecords' ConfigMap contains no records")
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
fqdn, _ := headlessSvc.GetAnnotations()[annotationTSMagicDNSName]
if fqdn == "" {
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
logger.Infof("removing DNS record for MagicDNS name %s", fqdn)
updateFunc := func(rec *operatorutils.Records) {
delete(rec.IP4, fqdn)
}
if err = h.updateDNSConfig(ctx, updateFunc); err != nil {
return fmt.Errorf("error updating DNS config: %w", err)
}
return h.removeHeadlessSvcFinalizer(ctx, headlessSvc)
}
func (dnsRR *dnsRecordsReconciler) removeHeadlessSvcFinalizer(ctx context.Context, headlessSvc *corev1.Service) error {
idx := slices.Index(headlessSvc.Finalizers, dnsRecordsRecocilerFinalizer)
if idx == -1 {
return nil
}
headlessSvc.Finalizers = append(headlessSvc.Finalizers[:idx], headlessSvc.Finalizers[idx+1:]...)
return dnsRR.Update(ctx, headlessSvc)
}
// fqdnForDNSRecord returns MagicDNS name associated with a given headless Service.
// If the headless Service is for a tailscale Ingress proxy, returns ingress.status.loadBalancer.ingress.hostname.
// If the headless Service is for a tailscale egress proxy configured via tailscale.com/tailnet-fqdn annotation, returns the annotation value.
// This function is not expected to be called with headless Services for other
// proxy types, or any other Services, but it just returns an empty string if
// that happens.
func (dnsRR *dnsRecordsReconciler) fqdnForDNSRecord(ctx context.Context, headlessSvc *corev1.Service, logger *zap.SugaredLogger) (string, error) {
parentName := parentFromObjectLabels(headlessSvc)
if isManagedByType(headlessSvc, "ingress") {
ing := new(networkingv1.Ingress)
if err := dnsRR.Get(ctx, parentName, ing); err != nil {
return "", err
}
if len(ing.Status.LoadBalancer.Ingress) == 0 {
return "", nil
}
return ing.Status.LoadBalancer.Ingress[0].Hostname, nil
}
if isManagedByType(headlessSvc, "svc") {
svc := new(corev1.Service)
if err := dnsRR.Get(ctx, parentName, svc); apierrors.IsNotFound(err) {
logger.Info("[unexpected] parent Service for egress proxy %s not found", headlessSvc.Name)
return "", nil
} else if err != nil {
return "", err
}
return svc.Annotations[AnnotationTailnetTargetFQDN], nil
}
return "", nil
}
// updateDNSConfig runs the provided update function against dnsrecords
// ConfigMap. At this point the in-cluster ts.net nameserver is expected to be
// successfully created together with the ConfigMap.
func (dnsRR *dnsRecordsReconciler) updateDNSConfig(ctx context.Context, update func(*operatorutils.Records)) error {
cm := &corev1.ConfigMap{}
err := dnsRR.Get(ctx, types.NamespacedName{Name: operatorutils.DNSRecordsCMName, Namespace: dnsRR.tsNamespace}, cm)
if apierrors.IsNotFound(err) {
dnsRR.logger.Info("[unexpected] dnsrecords ConfigMap not found in cluster. Not updating DNS records. Please open an isue and attach operator logs.")
return nil
}
if err != nil {
return fmt.Errorf("error retrieving dnsrecords ConfigMap: %w", err)
}
dnsRecords := operatorutils.Records{Version: operatorutils.Alpha1Version, IP4: map[string][]string{}}
if cm.Data != nil && cm.Data[operatorutils.DNSRecordsCMKey] != "" {
if err := json.Unmarshal([]byte(cm.Data[operatorutils.DNSRecordsCMKey]), &dnsRecords); err != nil {
return err
}
}
update(&dnsRecords)
dnsRecordsBs, err := json.Marshal(dnsRecords)
if err != nil {
return fmt.Errorf("error marshalling DNS records: %w", err)
}
mak.Set(&cm.Data, operatorutils.DNSRecordsCMKey, string(dnsRecordsBs))
return dnsRR.Update(ctx, cm)
}
// isSvcForFQDNEgressProxy returns true if the Service is a headless Service
// created for a proxy for a tailscale egress Service configured via
// tailscale.com/tailnet-fqdn annotation.
func (dnsRR *dnsRecordsReconciler) isSvcForFQDNEgressProxy(ctx context.Context, svc *corev1.Service) (bool, error) {
if !isManagedByType(svc, "svc") {
return false, nil
}
parentName := parentFromObjectLabels(svc)
parentSvc := new(corev1.Service)
if err := dnsRR.Get(ctx, parentName, parentSvc); apierrors.IsNotFound(err) {
return false, nil
} else if err != nil {
return false, err
}
annots := parentSvc.Annotations
return annots != nil && annots[AnnotationTailnetTargetFQDN] != "", nil
}
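// For illustration (a sketch; names and values are hypothetical, mirroring the
// tests): the parent Service of a headless Service for which
// isSvcForFQDNEgressProxy returns true is an ExternalName Service annotated
// with the target tailnet FQDN, roughly:
//
//	apiVersion: v1
//	kind: Service
//	metadata:
//	  name: egress-fqdn
//	  namespace: test
//	  annotations:
//	    tailscale.com/tailnet-fqdn: foo.bar.ts.net
//	spec:
//	  type: ExternalName
//	  externalName: unused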

@ -0,0 +1,198 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"encoding/json"
"testing"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest"
"tailscale.com/types/ptr"
)
func TestDNSRecordsReconciler(t *testing.T) {
// Preconfigure a cluster with a DNSConfig
dnsConfig := &tsapi.DNSConfig{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
TypeMeta: metav1.TypeMeta{Kind: "DNSConfig"},
Spec: tsapi.DNSConfigSpec{
Nameserver: &tsapi.Nameserver{},
}}
ing := &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{
Name: "ts-ingress",
Namespace: "test",
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
},
Status: networkingv1.IngressStatus{
LoadBalancer: networkingv1.IngressLoadBalancerStatus{
Ingress: []networkingv1.IngressLoadBalancerIngress{{
Hostname: "cluster.ingress.ts.net"}},
},
},
}
cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "dnsrecords", Namespace: "tailscale"}}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(cm).
WithObjects(dnsConfig).
WithObjects(ing).
WithStatusSubresource(dnsConfig, ing).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
cl := tstest.NewClock(tstest.ClockOpts{})
// Set the ready condition of the DNSConfig
mustUpdateStatus[tsapi.DNSConfig](t, fc, "", "test", func(c *tsapi.DNSConfig) {
operatorutils.SetDNSConfigCondition(c, tsapi.NameserverReady, metav1.ConditionTrue, reasonNameserverCreated, reasonNameserverCreated, 0, cl, zl.Sugar())
})
dnsRR := &dnsRecordsReconciler{
Client: fc,
logger: zl.Sugar(),
tsNamespace: "tailscale",
}
// 1. DNS record is created for an egress proxy configured via
// tailscale.com/tailnet-fqdn annotation
egressSvcFQDN := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "egress-fqdn",
Namespace: "test",
Annotations: map[string]string{"tailscale.com/tailnet-fqdn": "foo.bar.ts.net"},
},
Spec: corev1.ServiceSpec{
ExternalName: "unused",
Type: corev1.ServiceTypeExternalName,
},
}
headlessForEgressSvcFQDN := headlessSvcForParent(egressSvcFQDN, "svc") // create the proxy headless Service
ep := endpointSliceForService(headlessForEgressSvcFQDN, "10.9.8.7")
mustCreate(t, fc, egressSvcFQDN)
mustCreate(t, fc, headlessForEgressSvcFQDN)
mustCreate(t, fc, ep)
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler reconciles the headless Service
// ConfigMap should now have a record for foo.bar.ts.net -> 10.9.8.7
wantHosts := map[string][]string{"foo.bar.ts.net": {"10.9.8.7"}}
expectHostsRecords(t, fc, wantHosts)
// 2. DNS record is updated if tailscale.com/tailnet-fqdn annotation's
// value changes
mustUpdate(t, fc, "test", "egress-fqdn", func(svc *corev1.Service) {
svc.Annotations["tailscale.com/tailnet-fqdn"] = "baz.bar.ts.net"
})
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler reconciles the headless Service
wantHosts = map[string][]string{"baz.bar.ts.net": {"10.9.8.7"}}
expectHostsRecords(t, fc, wantHosts)
// 3. DNS record is updated if the IP address of the proxy Pod changes.
ep = endpointSliceForService(headlessForEgressSvcFQDN, "10.6.5.4")
mustUpdate(t, fc, ep.Namespace, ep.Name, func(ep *discoveryv1.EndpointSlice) {
ep.Endpoints[0].Addresses = []string{"10.6.5.4"}
})
expectReconciled(t, dnsRR, "tailscale", "egress-fqdn") // dns-records-reconciler reconciles the headless Service
wantHosts = map[string][]string{"baz.bar.ts.net": {"10.6.5.4"}}
expectHostsRecords(t, fc, wantHosts)
// 4. DNS record is created for an ingress proxy configured via Ingress
headlessForIngress := headlessSvcForParent(ing, "ingress")
ep = endpointSliceForService(headlessForIngress, "10.9.8.7")
mustCreate(t, fc, headlessForIngress)
mustCreate(t, fc, ep)
expectReconciled(t, dnsRR, "tailscale", "ts-ingress") // dns-records-reconciler should reconcile the headless Service
wantHosts["cluster.ingress.ts.net"] = []string{"10.9.8.7"}
expectHostsRecords(t, fc, wantHosts)
// 5. DNS records are updated if Ingress's MagicDNS name changes (i.e. the user changed spec.tls.hosts[0])
t.Log("test case 5")
mustUpdateStatus(t, fc, "test", "ts-ingress", func(ing *networkingv1.Ingress) {
ing.Status.LoadBalancer.Ingress[0].Hostname = "another.ingress.ts.net"
})
expectReconciled(t, dnsRR, "tailscale", "ts-ingress") // dns-records-reconciler should reconcile the headless Service
delete(wantHosts, "cluster.ingress.ts.net")
wantHosts["another.ingress.ts.net"] = []string{"10.9.8.7"}
expectHostsRecords(t, fc, wantHosts)
// 6. DNS records are updated if Ingress proxy's Pod IP changes
mustUpdate(t, fc, ep.Namespace, ep.Name, func(ep *discoveryv1.EndpointSlice) {
ep.Endpoints[0].Addresses = []string{"7.8.9.10"}
})
expectReconciled(t, dnsRR, "tailscale", "ts-ingress")
wantHosts["another.ingress.ts.net"] = []string{"7.8.9.10"}
expectHostsRecords(t, fc, wantHosts)
}
func headlessSvcForParent(o client.Object, typ string) *corev1.Service {
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: o.GetName(),
Namespace: "tailscale",
Labels: map[string]string{
LabelManaged: "true",
LabelParentName: o.GetName(),
LabelParentNamespace: o.GetNamespace(),
LabelParentType: typ,
},
},
Spec: corev1.ServiceSpec{
ClusterIP: "None",
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{"foo": "bar"},
},
}
}
func endpointSliceForService(svc *corev1.Service, ip string) *discoveryv1.EndpointSlice {
return &discoveryv1.EndpointSlice{
ObjectMeta: metav1.ObjectMeta{
Name: svc.Name,
Namespace: svc.Namespace,
Labels: map[string]string{discoveryv1.LabelServiceName: svc.Name},
},
Endpoints: []discoveryv1.Endpoint{{
Addresses: []string{ip},
}},
}
}
func expectHostsRecords(t *testing.T, cl client.Client, wantsHosts map[string][]string) {
t.Helper()
cm := new(corev1.ConfigMap)
if err := cl.Get(context.Background(), types.NamespacedName{Name: "dnsrecords", Namespace: "tailscale"}, cm); err != nil {
t.Fatalf("getting dnsconfig ConfigMap: %v", err)
}
if cm.Data == nil {
t.Fatal("dnsconfig ConfigMap has no data")
}
dnsConfigString, ok := cm.Data[operatorutils.DNSRecordsCMKey]
if !ok {
t.Fatal("dnsconfig ConfigMap does not contain dnsconfig")
}
dnsConfig := &operatorutils.Records{}
if err := json.Unmarshal([]byte(dnsConfigString), dnsConfig); err != nil {
t.Fatalf("unmarshaling dnsconfig: %v", err)
}
if diff := cmp.Diff(dnsConfig.IP4, wantsHosts); diff != "" {
t.Fatalf("unexpected dns config (-got +want):\n%s", diff)
}
}

@ -22,9 +22,11 @@ const (
operatorDeploymentFilesPath = "cmd/k8s-operator/deploy"
connectorCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_connectors.yaml"
proxyClassCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_proxyclasses.yaml"
dnsConfigCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_dnsconfigs.yaml"
helmTemplatesPath = operatorDeploymentFilesPath + "/chart/templates"
connectorCRDHelmTemplatePath = helmTemplatesPath + "/connector.yaml"
proxyClassCRDHelmTemplatePath = helmTemplatesPath + "/proxyclass.yaml"
dnsConfigCRDHelmTemplatePath = helmTemplatesPath + "/dnsconfig.yaml"
helmConditionalStart = "{{ if .Values.installCRDs -}}\n"
helmConditionalEnd = "{{- end -}}"
@ -36,10 +38,10 @@ func main() {
}
repoRoot := "../../"
switch os.Args[1] {
case "helmcrd": // insert CRD to Helm templates behind a installCRDs=true conditional check
log.Print("Adding Connector CRD to Helm templates")
case "helmcrd": // insert CRDs to Helm templates behind a installCRDs=true conditional check
log.Print("Adding CRDs to Helm templates")
if err := generate("./"); err != nil {
log.Fatalf("error adding Connector CRD to Helm templates: %v", err)
log.Fatalf("error adding CRDs to Helm templates: %v", err)
}
return
case "staticmanifests": // generate static manifests from Helm templates (including the CRD)
@ -108,7 +110,7 @@ func main() {
}
}
// generate places tailscale.com CRDs (currently Connector and ProxyClass) into
// generate places tailscale.com CRDs (currently Connector, ProxyClass and DNSConfig) into
// the Helm chart templates behind .Values.installCRDs=true condition (true by
// default).
func generate(baseDir string) error {
@ -140,6 +142,9 @@ func generate(baseDir string) error {
if err := addCRDToHelm(proxyClassCRDPath, proxyClassCRDHelmTemplatePath); err != nil {
return fmt.Errorf("error adding ProxyClass CRD to Helm templates: %w", err)
}
if err := addCRDToHelm(dnsConfigCRDPath, dnsConfigCRDHelmTemplatePath); err != nil {
return fmt.Errorf("error adding DNSConfig CRD to Helm templates: %w", err)
}
return nil
}
@ -151,5 +156,8 @@ func cleanup(baseDir string) error {
if err := os.Remove(filepath.Join(baseDir, proxyClassCRDHelmTemplatePath)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up ProxyClass CRD template: %w", err)
}
if err := os.Remove(filepath.Join(baseDir, dnsConfigCRDHelmTemplatePath)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up DNSConfig CRD template: %w", err)
}
return nil
}

@ -56,6 +56,9 @@ func Test_generate(t *testing.T) {
if !strings.Contains(installContentsWithCRD.String(), "name: proxyclasses.tailscale.com") {
t.Errorf("ProxyClass CRD not found in default chart install")
}
if !strings.Contains(installContentsWithCRD.String(), "name: dnsconfigs.tailscale.com") {
t.Errorf("DNSConfig CRD not found in default chart install")
}
// Test that CRDs can be excluded from Helm chart install
installContentsWithoutCRD := bytes.NewBuffer([]byte{})
@ -71,4 +74,7 @@ func Test_generate(t *testing.T) {
if strings.Contains(installContentsWithoutCRD.String(), "name: connectors.tailscale.com") {
t.Errorf("ProxyClass CRD found in chart install that should not contain a CRD")
}
if strings.Contains(installContentsWithoutCRD.String(), "name: dnsconfigs.tailscale.com") {
t.Errorf("DNSConfig CRD found in chart install that should not contain a CRD")
}
}

@ -88,12 +88,11 @@ func TestTailscaleIngress(t *testing.T) {
fullName, shortName := findGenName(t, fc, "default", "test", "ingress")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "ingress",
hostname: "default-test",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "ingress",
hostname: "default-test",
}
serveConfig := &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
@ -101,9 +100,9 @@ func TestTailscaleIngress(t *testing.T) {
}
opts.serveConfig = serveConfig
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"))
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"), nil)
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 2. Ingress status gets updated with ingress proxy's MagicDNS name
// once that becomes available.
@ -118,7 +117,7 @@ func TestTailscaleIngress(t *testing.T) {
{Hostname: "foo.tailnetxyz.ts.net", Ports: []networkingv1.IngressPortStatus{{Port: 443, Protocol: "TCP"}}},
},
}
expectEqual(t, fc, ing)
expectEqual(t, fc, ing, nil)
// 3. Resources get created for Ingress that should allow forwarding
// cluster traffic
@ -126,11 +125,8 @@ func TestTailscaleIngress(t *testing.T) {
mak.Set(&ing.ObjectMeta.Annotations, AnnotationExperimentalForwardClusterTrafficViaL7IngresProxy, "true")
})
opts.shouldEnableForwardingClusterTrafficViaIngress = true
// configfile hash changed at this point in test env only because we
// lost auth key due to how changes are applied in test client.
opts.confFileHash = "fb9006e30ecda75e88c29dcd0ca2dd28a2ae964d001c66e1be3efe159cc3821d"
expectReconciled(t, ingR, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 4. Resources get cleaned up when Ingress class is unset
mustUpdate(t, fc, "default", "test", func(ing *networkingv1.Ingress) {
@ -223,12 +219,11 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
fullName, shortName := findGenName(t, fc, "default", "test", "ingress")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "ingress",
hostname: "default-test",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "ingress",
hostname: "default-test",
}
serveConfig := &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
@ -236,9 +231,9 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
}
opts.serveConfig = serveConfig
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"))
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"), nil)
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 2. Ingress is updated to specify a ProxyClass, ProxyClass is not yet
// ready, so proxy resource configuration does not change.
@ -246,7 +241,7 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
mak.Set(&ing.ObjectMeta.Labels, LabelProxyClass, "custom-metadata")
})
expectReconciled(t, ingR, "default", "test")
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts))
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 3. ProxyClass is set to Ready by proxy-class reconciler. Ingress get
// reconciled and configuration from the ProxyClass is applied to the
@ -261,10 +256,7 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
})
expectReconciled(t, ingR, "default", "test")
opts.proxyClass = pc.Name
// configfile hash changed at this point in test env only because we
// lost auth key due to how changes are applied in test client.
opts.confFileHash = "fb9006e30ecda75e88c29dcd0ca2dd28a2ae964d001c66e1be3efe159cc3821d"
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts))
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 4. tailscale.com/proxy-class label is removed from the Ingress, the
// Ingress gets reconciled and the custom ProxyClass configuration is
@ -274,5 +266,5 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
})
expectReconciled(t, ingR, "default", "test")
opts.proxyClass = ""
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts))
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
}

@ -0,0 +1,283 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"fmt"
"slices"
"sync"
_ "embed"
"github.com/pkg/errors"
"go.uber.org/zap"
xslices "golang.org/x/exp/slices"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/yaml"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstime"
"tailscale.com/util/clientmetric"
"tailscale.com/util/set"
)
const (
reasonNameserverCreationFailed = "NameserverCreationFailed"
reasonMultipleDNSConfigsPresent = "MultipleDNSConfigsPresent"
reasonNameserverCreated = "NameserverCreated"
messageNameserverCreationFailed = "Failed creating nameserver resources: %v"
messageMultipleDNSConfigsPresent = "Multiple DNSConfig resources found in cluster. Please ensure no more than one is present."
defaultNameserverImageRepo = "tailscale/k8s-nameserver"
// TODO (irbekrm): once we start publishing nameserver images for stable
// track, replace 'unstable' here with the version of this operator
// instance.
defaultNameserverImageTag = "unstable"
)
// NameserverReconciler knows how to create nameserver resources in cluster in
// response to users applying DNSConfig.
type NameserverReconciler struct {
client.Client
logger *zap.SugaredLogger
recorder record.EventRecorder
clock tstime.Clock
tsNamespace string
mu sync.Mutex // protects following
managedNameservers set.Slice[types.UID] // one or none
}
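// For illustration (a sketch; the YAML field casing is assumed from the Go
// types in tailscale.com/k8s-operator/apis/v1alpha1, and the name is
// hypothetical): a minimal DNSConfig that this reconciler responds to could
// look like:
//
//	apiVersion: tailscale.com/v1alpha1
//	kind: DNSConfig
//	metadata:
//	  name: ts-dns
//	spec:
//	  nameserver: {}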
var (
gaugeNameserverResources = clientmetric.NewGauge("k8s_nameserver_resources")
)
func (a *NameserverReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
logger := a.logger.With("dnsConfig", req.Name)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
var dnsCfg tsapi.DNSConfig
err = a.Get(ctx, req.NamespacedName, &dnsCfg)
if apierrors.IsNotFound(err) {
// Request object not found, could have been deleted after reconcile request.
logger.Debugf("dnsconfig not found, assuming it was deleted")
return reconcile.Result{}, nil
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get dnsconfig: %w", err)
}
if !dnsCfg.DeletionTimestamp.IsZero() {
ix := xslices.Index(dnsCfg.Finalizers, FinalizerName)
if ix < 0 {
logger.Debugf("no finalizer, nothing to do")
return reconcile.Result{}, nil
}
logger.Info("Cleaning up DNSConfig resources")
if err := a.maybeCleanup(ctx, &dnsCfg, logger); err != nil {
logger.Errorf("error cleaning up reconciler resource: %v", err)
return res, err
}
dnsCfg.Finalizers = append(dnsCfg.Finalizers[:ix], dnsCfg.Finalizers[ix+1:]...)
if err := a.Update(ctx, &dnsCfg); err != nil {
logger.Errorf("error removing finalizer: %v", err)
return reconcile.Result{}, err
}
logger.Infof("Nameserver resources cleaned up")
return reconcile.Result{}, nil
}
oldCnStatus := dnsCfg.Status.DeepCopy()
setStatus := func(dnsCfg *tsapi.DNSConfig, conditionType tsapi.ConnectorConditionType, status metav1.ConditionStatus, reason, message string) (reconcile.Result, error) {
tsoperator.SetDNSConfigCondition(dnsCfg, tsapi.NameserverReady, status, reason, message, dnsCfg.Generation, a.clock, logger)
if !apiequality.Semantic.DeepEqual(oldCnStatus, dnsCfg.Status) {
// An error encountered here should get returned by the Reconcile function.
if updateErr := a.Client.Status().Update(ctx, dnsCfg); updateErr != nil {
err = errors.Wrap(err, updateErr.Error())
}
}
return res, err
}
var dnsCfgs tsapi.DNSConfigList
if err := a.List(ctx, &dnsCfgs); err != nil {
return res, fmt.Errorf("error listing DNSConfigs: %w", err)
}
if len(dnsCfgs.Items) > 1 { // enforce DNSConfig to be a singleton
msg := "invalid cluster configuration: more than one tailscale.com/dnsconfigs found. Please ensure that no more than one is created."
logger.Error(msg)
a.recorder.Event(&dnsCfg, corev1.EventTypeWarning, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionFalse, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
}
if !slices.Contains(dnsCfg.Finalizers, FinalizerName) {
logger.Infof("ensuring nameserver resources")
dnsCfg.Finalizers = append(dnsCfg.Finalizers, FinalizerName)
if err := a.Update(ctx, &dnsCfg); err != nil {
msg := fmt.Sprintf(messageNameserverCreationFailed, err)
logger.Error(msg)
return setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionFalse, reasonNameserverCreationFailed, msg)
}
}
if err := a.maybeProvision(ctx, &dnsCfg, logger); err != nil {
return reconcile.Result{}, fmt.Errorf("error provisioning nameserver resources: %w", err)
}
a.mu.Lock()
a.managedNameservers.Add(dnsCfg.UID)
a.mu.Unlock()
gaugeNameserverResources.Set(int64(a.managedNameservers.Len()))
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{Name: "nameserver", Namespace: a.tsNamespace},
}
if err := a.Client.Get(ctx, client.ObjectKeyFromObject(svc), svc); err != nil {
return res, fmt.Errorf("error getting Service: %w", err)
}
if ip := svc.Spec.ClusterIP; ip != "" && ip != "None" {
dnsCfg.Status.Nameserver = &tsapi.NameserverStatus{
IP: ip,
}
return setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionTrue, reasonNameserverCreated, reasonNameserverCreated)
}
logger.Info("nameserver Service does not have an IP address allocated, waiting...")
return reconcile.Result{}, nil
}
func nameserverResourceLabels(name, namespace string) map[string]string {
labels := childResourceLabels(name, namespace, "nameserver")
labels["app.kubernetes.io/name"] = "tailscale"
labels["app.kubernetes.io/component"] = "nameserver"
return labels
}
func (a *NameserverReconciler) maybeProvision(ctx context.Context, tsDNSCfg *tsapi.DNSConfig, logger *zap.SugaredLogger) error {
labels := nameserverResourceLabels(tsDNSCfg.Name, a.tsNamespace)
dCfg := &deployConfig{
ownerRefs: []metav1.OwnerReference{*metav1.NewControllerRef(tsDNSCfg, tsapi.SchemeGroupVersion.WithKind("DNSConfig"))},
namespace: a.tsNamespace,
labels: labels,
imageRepo: defaultNameserverImageRepo,
imageTag: defaultNameserverImageTag,
}
if tsDNSCfg.Spec.Nameserver.Image != nil && tsDNSCfg.Spec.Nameserver.Image.Repo != "" {
dCfg.imageRepo = tsDNSCfg.Spec.Nameserver.Image.Repo
}
if tsDNSCfg.Spec.Nameserver.Image != nil && tsDNSCfg.Spec.Nameserver.Image.Tag != "" {
dCfg.imageTag = tsDNSCfg.Spec.Nameserver.Image.Tag
}
for _, deployable := range []deployable{saDeployable, deployDeployable, svcDeployable, cmDeployable} {
if err := deployable.updateObj(ctx, dCfg, a.Client); err != nil {
return fmt.Errorf("error reconciling %s: %w", deployable.kind, err)
}
}
return nil
}
// maybeCleanup removes the DNSConfig from being tracked. The cluster resources
// created will be automatically garbage collected as they are owned by the
// DNSConfig.
func (a *NameserverReconciler) maybeCleanup(ctx context.Context, dnsCfg *tsapi.DNSConfig, logger *zap.SugaredLogger) error {
a.mu.Lock()
a.managedNameservers.Remove(dnsCfg.UID)
a.mu.Unlock()
gaugeNameserverResources.Set(int64(a.managedNameservers.Len()))
return nil
}
type deployable struct {
kind string
updateObj func(context.Context, *deployConfig, client.Client) error
}
type deployConfig struct {
imageRepo string
imageTag string
labels map[string]string
ownerRefs []metav1.OwnerReference
namespace string
}
var (
//go:embed deploy/manifests/nameserver/cm.yaml
cmYaml []byte
//go:embed deploy/manifests/nameserver/deploy.yaml
deployYaml []byte
//go:embed deploy/manifests/nameserver/sa.yaml
saYaml []byte
//go:embed deploy/manifests/nameserver/svc.yaml
svcYaml []byte
deployDeployable = deployable{
kind: "Deployment",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
d := new(appsv1.Deployment)
if err := yaml.Unmarshal(deployYaml, &d); err != nil {
return fmt.Errorf("error unmarshalling Deployment yaml: %w", err)
}
d.Spec.Template.Spec.Containers[0].Image = fmt.Sprintf("%s:%s", cfg.imageRepo, cfg.imageTag)
d.ObjectMeta.Namespace = cfg.namespace
d.ObjectMeta.Labels = cfg.labels
d.ObjectMeta.OwnerReferences = cfg.ownerRefs
updateF := func(oldD *appsv1.Deployment) {
oldD.Spec = d.Spec
}
_, err := createOrUpdate[appsv1.Deployment](ctx, kubeClient, cfg.namespace, d, updateF)
return err
},
}
saDeployable = deployable{
kind: "ServiceAccount",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
sa := new(corev1.ServiceAccount)
if err := yaml.Unmarshal(saYaml, &sa); err != nil {
return fmt.Errorf("error unmarshalling ServiceAccount yaml: %w", err)
}
sa.ObjectMeta.Labels = cfg.labels
sa.ObjectMeta.OwnerReferences = cfg.ownerRefs
sa.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate(ctx, kubeClient, cfg.namespace, sa, func(*corev1.ServiceAccount) {})
return err
},
}
svcDeployable = deployable{
kind: "Service",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
svc := new(corev1.Service)
if err := yaml.Unmarshal(svcYaml, &svc); err != nil {
return fmt.Errorf("error unmarshalling Service yaml: %w", err)
}
svc.ObjectMeta.Labels = cfg.labels
svc.ObjectMeta.OwnerReferences = cfg.ownerRefs
svc.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate[corev1.Service](ctx, kubeClient, cfg.namespace, svc, func(*corev1.Service) {})
return err
},
}
cmDeployable = deployable{
kind: "ConfigMap",
updateObj: func(ctx context.Context, cfg *deployConfig, kubeClient client.Client) error {
cm := new(corev1.ConfigMap)
if err := yaml.Unmarshal(cmYaml, &cm); err != nil {
return fmt.Errorf("error unmarshalling ConfigMap yaml: %w", err)
}
cm.ObjectMeta.Labels = cfg.labels
cm.ObjectMeta.OwnerReferences = cfg.ownerRefs
cm.ObjectMeta.Namespace = cfg.namespace
_, err := createOrUpdate[corev1.ConfigMap](ctx, kubeClient, cfg.namespace, cm, func(cm *corev1.ConfigMap) {})
return err
},
}
)

@ -0,0 +1,127 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet and to make Tailscale nodes available to cluster
// workloads
package main
import (
"encoding/json"
"testing"
"time"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/yaml"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest"
"tailscale.com/util/mak"
)
func TestNameserverReconciler(t *testing.T) {
dnsCfg := &tsapi.DNSConfig{
TypeMeta: metav1.TypeMeta{Kind: "DNSConfig", APIVersion: "tailscale.com/v1alpha1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
Spec: tsapi.DNSConfigSpec{
Nameserver: &tsapi.Nameserver{
Image: &tsapi.Image{
Repo: "test",
Tag: "v0.0.1",
},
},
},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(dnsCfg).
WithStatusSubresource(dnsCfg).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
cl := tstest.NewClock(tstest.ClockOpts{})
nr := &NameserverReconciler{
Client: fc,
clock: cl,
logger: zl.Sugar(),
tsNamespace: "tailscale",
}
expectReconciled(t, nr, "", "test")
// Verify that nameserver Deployment has been created and has the expected fields.
wantsDeploy := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: "nameserver", Namespace: "tailscale"}, TypeMeta: metav1.TypeMeta{Kind: "Deployment", APIVersion: appsv1.SchemeGroupVersion.Identifier()}}
if err := yaml.Unmarshal(deployYaml, wantsDeploy); err != nil {
t.Fatalf("unmarshalling yaml: %v", err)
}
dnsCfgOwnerRef := metav1.NewControllerRef(dnsCfg, tsapi.SchemeGroupVersion.WithKind("DNSConfig"))
wantsDeploy.OwnerReferences = []metav1.OwnerReference{*dnsCfgOwnerRef}
wantsDeploy.Spec.Template.Spec.Containers[0].Image = "test:v0.0.1"
wantsDeploy.Namespace = "tailscale"
labels := nameserverResourceLabels("test", "tailscale")
wantsDeploy.ObjectMeta.Labels = labels
expectEqual(t, fc, wantsDeploy, nil)
// Verify that DNSConfig advertises the nameserver's Service IP address,
// has the ready status condition and the tailscale finalizer.
mustUpdate(t, fc, "tailscale", "nameserver", func(svc *corev1.Service) {
svc.Spec.ClusterIP = "1.2.3.4"
})
expectReconciled(t, nr, "", "test")
dnsCfg.Status.Nameserver = &tsapi.NameserverStatus{
IP: "1.2.3.4",
}
dnsCfg.Finalizers = []string{FinalizerName}
dnsCfg.Status.Conditions = append(dnsCfg.Status.Conditions, tsapi.ConnectorCondition{
Type: tsapi.NameserverReady,
Status: metav1.ConditionTrue,
Reason: reasonNameserverCreated,
Message: reasonNameserverCreated,
LastTransitionTime: &metav1.Time{Time: cl.Now().Truncate(time.Second)},
})
expectEqual(t, fc, dnsCfg, nil)
// Verify that nameserver image gets updated to match DNSConfig spec.
mustUpdate(t, fc, "", "test", func(dnsCfg *tsapi.DNSConfig) {
dnsCfg.Spec.Nameserver.Image.Tag = "v0.0.2"
})
expectReconciled(t, nr, "", "test")
wantsDeploy.Spec.Template.Spec.Containers[0].Image = "test:v0.0.2"
expectEqual(t, fc, wantsDeploy, nil)
// Verify that when another actor sets ConfigMap data, it does not get
// overwritten by nameserver reconciler.
dnsRecords := &operatorutils.Records{Version: "v1alpha1", IP4: map[string][]string{"foo.ts.net": {"1.2.3.4"}}}
bs, err := json.Marshal(dnsRecords)
if err != nil {
t.Fatalf("error marshalling ConfigMap contents: %v", err)
}
mustUpdate(t, fc, "tailscale", "dnsrecords", func(cm *corev1.ConfigMap) {
mak.Set(&cm.Data, "records.json", string(bs))
})
expectReconciled(t, nr, "", "test")
wantCm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "dnsrecords",
Namespace: "tailscale", Labels: labels, OwnerReferences: []metav1.OwnerReference{*dnsCfgOwnerRef}},
TypeMeta: metav1.TypeMeta{Kind: "ConfigMap", APIVersion: "v1"},
Data: map[string]string{"records.json": string(bs)},
}
expectEqual(t, fc, wantCm, nil)
// Verify that if dnsconfig.spec.nameserver.image.{repo,tag} are unset,
// the nameserver image defaults to tailscale/k8s-nameserver:unstable.
mustUpdate(t, fc, "", "test", func(dnsCfg *tsapi.DNSConfig) {
dnsCfg.Spec.Nameserver.Image = nil
})
expectReconciled(t, nr, "", "test")
wantsDeploy.Spec.Template.Spec.Containers[0].Image = "tailscale/k8s-nameserver:unstable"
expectEqual(t, fc, wantsDeploy, nil)
}

@ -20,6 +20,7 @@ import (
"golang.org/x/oauth2/clientcredentials"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/rest"
@ -59,12 +60,13 @@ func main() {
tailscale.I_Acknowledge_This_API_Is_Unstable = true
var (
tsNamespace = defaultEnv("OPERATOR_NAMESPACE", "")
tslogging = defaultEnv("OPERATOR_LOGGING", "info")
image = defaultEnv("PROXY_IMAGE", "tailscale/tailscale:latest")
priorityClassName = defaultEnv("PROXY_PRIORITY_CLASS_NAME", "")
tags = defaultEnv("PROXY_TAGS", "tag:k8s")
tsFirewallMode = defaultEnv("PROXY_FIREWALL_MODE", "")
tsNamespace = defaultEnv("OPERATOR_NAMESPACE", "")
tslogging = defaultEnv("OPERATOR_LOGGING", "info")
image = defaultEnv("PROXY_IMAGE", "tailscale/tailscale:latest")
priorityClassName = defaultEnv("PROXY_PRIORITY_CLASS_NAME", "")
tags = defaultEnv("PROXY_TAGS", "tag:k8s")
tsFirewallMode = defaultEnv("PROXY_FIREWALL_MODE", "")
isDefaultLoadBalancer = defaultBool("OPERATOR_DEFAULT_LOAD_BALANCER", false)
)
var opts []kzap.Opts
@ -93,9 +95,19 @@ func main() {
defer s.Close()
restConfig := config.GetConfigOrDie()
maybeLaunchAPIServerProxy(zlog, restConfig, s, mode)
// TODO (irbekrm): gather the reconciler options into an opts struct
// rather than passing a million of them in one by one.
runReconcilers(zlog, s, tsNamespace, restConfig, tsClient, image, priorityClassName, tags, tsFirewallMode)
rOpts := reconcilerOpts{
log: zlog,
tsServer: s,
tsClient: tsClient,
tailscaleNamespace: tsNamespace,
restConfig: restConfig,
proxyImage: image,
proxyPriorityClassName: priorityClassName,
proxyActAsDefaultLoadBalancer: isDefaultLoadBalancer,
proxyTags: tags,
proxyFirewallMode: tsFirewallMode,
}
runReconcilers(rOpts)
}
// initTSNet initializes the tsnet.Server and logs in to Tailscale. It uses the
@ -203,11 +215,8 @@ waitOnline:
// runReconcilers starts the controller-runtime manager and registers the
// ServiceReconciler. It blocks forever.
func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string, restConfig *rest.Config, tsClient *tailscale.Client, image, priorityClassName, tags, tsFirewallMode string) {
var (
isDefaultLoadBalancer = defaultBool("OPERATOR_DEFAULT_LOAD_BALANCER", false)
)
startlog := zlog.Named("startReconcilers")
func runReconcilers(opts reconcilerOpts) {
startlog := opts.log.Named("startReconcilers")
// For secrets and statefulsets, we only get permission to touch the objects
// in the controller's own namespace. This cannot be expressed by
// .Watches(...) below, instead you have to add a per-type field selector to
@ -215,7 +224,7 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
// implicitly filter what parts of the world the builder code gets to see at
// all.
nsFilter := cache.ByObject{
Field: client.InNamespace(tsNamespace).AsSelector(),
Field: client.InNamespace(opts.tailscaleNamespace).AsSelector(),
}
mgrOpts := manager.Options{
// TODO (irbekrm): stricter filtering what we watch/cache/call
@ -223,33 +232,37 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
// resources that we GET via the controller manager's client.
Cache: cache.Options{
ByObject: map[client.Object]cache.ByObject{
&corev1.Secret{}: nsFilter,
&appsv1.StatefulSet{}: nsFilter,
&corev1.Secret{}: nsFilter,
&corev1.ServiceAccount{}: nsFilter,
&corev1.ConfigMap{}: nsFilter,
&appsv1.StatefulSet{}: nsFilter,
&appsv1.Deployment{}: nsFilter,
&discoveryv1.EndpointSlice{}: nsFilter,
},
},
Scheme: tsapi.GlobalScheme,
}
mgr, err := manager.New(restConfig, mgrOpts)
mgr, err := manager.New(opts.restConfig, mgrOpts)
if err != nil {
startlog.Fatalf("could not create manager: %v", err)
}
svcFilter := handler.EnqueueRequestsFromMapFunc(serviceHandler)
svcChildFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("svc"))
// If a ProxyClassChanges, enqueue all Services labeled with that
// If a ProxyClass changes, enqueue all Services labeled with that
// ProxyClass's name.
proxyClassFilterForSvc := handler.EnqueueRequestsFromMapFunc(proxyClassHandlerForSvc(mgr.GetClient(), startlog))
eventRecorder := mgr.GetEventRecorderFor("tailscale-operator")
ssr := &tailscaleSTSReconciler{
Client: mgr.GetClient(),
tsnetServer: s,
tsClient: tsClient,
defaultTags: strings.Split(tags, ","),
operatorNamespace: tsNamespace,
proxyImage: image,
proxyPriorityClassName: priorityClassName,
tsFirewallMode: tsFirewallMode,
tsnetServer: opts.tsServer,
tsClient: opts.tsClient,
defaultTags: strings.Split(opts.proxyTags, ","),
operatorNamespace: opts.tailscaleNamespace,
proxyImage: opts.proxyImage,
proxyPriorityClassName: opts.proxyPriorityClassName,
tsFirewallMode: opts.proxyFirewallMode,
}
err = builder.
ControllerManagedBy(mgr).
@ -261,9 +274,10 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
Complete(&ServiceReconciler{
ssr: ssr,
Client: mgr.GetClient(),
logger: zlog.Named("service-reconciler"),
isDefaultLoadBalancer: isDefaultLoadBalancer,
logger: opts.log.Named("service-reconciler"),
isDefaultLoadBalancer: opts.proxyActAsDefaultLoadBalancer,
recorder: eventRecorder,
tsNamespace: opts.tailscaleNamespace,
})
if err != nil {
startlog.Fatalf("could not create service reconciler: %v", err)
@ -285,7 +299,7 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
ssr: ssr,
recorder: eventRecorder,
Client: mgr.GetClient(),
logger: zlog.Named("ingress-reconciler"),
logger: opts.log.Named("ingress-reconciler"),
})
if err != nil {
startlog.Fatalf("could not create ingress reconciler: %v", err)
@ -304,29 +318,201 @@ func runReconcilers(zlog *zap.SugaredLogger, s *tsnet.Server, tsNamespace string
ssr: ssr,
recorder: eventRecorder,
Client: mgr.GetClient(),
logger: zlog.Named("connector-reconciler"),
logger: opts.log.Named("connector-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatal("could not create connector reconciler: %v", err)
startlog.Fatalf("could not create connector reconciler: %v", err)
}
// TODO (irbekrm): switch to metadata-only watches for resources whose
// spec we don't need to inspect to reduce memory consumption.
// https://github.com/kubernetes-sigs/controller-runtime/issues/1159
nameserverFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("nameserver"))
err = builder.ControllerManagedBy(mgr).
For(&tsapi.DNSConfig{}).
Watches(&appsv1.Deployment{}, nameserverFilter).
Watches(&corev1.ConfigMap{}, nameserverFilter).
Watches(&corev1.Service{}, nameserverFilter).
Watches(&corev1.ServiceAccount{}, nameserverFilter).
Complete(&NameserverReconciler{
recorder: eventRecorder,
tsNamespace: opts.tailscaleNamespace,
Client: mgr.GetClient(),
logger: opts.log.Named("nameserver-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatalf("could not create nameserver reconciler: %v", err)
}
err = builder.ControllerManagedBy(mgr).
For(&tsapi.ProxyClass{}).
Complete(&ProxyClassReconciler{
Client: mgr.GetClient(),
recorder: eventRecorder,
logger: zlog.Named("proxyclass-reconciler"),
logger: opts.log.Named("proxyclass-reconciler"),
clock: tstime.DefaultClock{},
})
if err != nil {
startlog.Fatal("could not create proxyclass reconciler: %v", err)
}
logger := startlog.Named("dns-records-reconciler-event-handlers")
// On EndpointSlice events, if it is an EndpointSlice for an
// ingress/egress proxy headless Service, reconcile the headless
// Service.
dnsRREpsOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerEndpointSliceHandler)
// On DNSConfig changes, reconcile all headless Services for
// ingress/egress proxies in operator namespace.
dnsRRDNSConfigOpts := handler.EnqueueRequestsFromMapFunc(enqueueAllIngressEgressProxySvcsInNS(opts.tailscaleNamespace, mgr.GetClient(), logger))
// On Service events, if it is an ingress/egress proxy headless Service, reconcile it.
dnsRRServiceOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerServiceHandler)
// On Ingress events, if it is a tailscale Ingress or if tailscale is the default ingress controller, reconcile the proxy
// headless Service.
dnsRRIngressOpts := handler.EnqueueRequestsFromMapFunc(dnsRecordsReconcilerIngressHandler(opts.tailscaleNamespace, opts.proxyActAsDefaultLoadBalancer, mgr.GetClient(), logger))
err = builder.ControllerManagedBy(mgr).
Named("dns-records-reconciler").
Watches(&corev1.Service{}, dnsRRServiceOpts).
Watches(&networkingv1.Ingress{}, dnsRRIngressOpts).
Watches(&discoveryv1.EndpointSlice{}, dnsRREpsOpts).
Watches(&tsapi.DNSConfig{}, dnsRRDNSConfigOpts).
Complete(&dnsRecordsReconciler{
Client: mgr.GetClient(),
tsNamespace: opts.tailscaleNamespace,
logger: opts.log.Named("dns-records-reconciler"),
isDefaultLoadBalancer: opts.proxyActAsDefaultLoadBalancer,
})
if err != nil {
startlog.Fatalf("could not create DNS records reconciler: %v", err)
}
startlog.Infof("Startup complete, operator running, version: %s", version.Long())
if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
startlog.Fatalf("could not start manager: %v", err)
}
}
type reconcilerOpts struct {
log *zap.SugaredLogger
tsServer *tsnet.Server
tsClient *tailscale.Client
tailscaleNamespace string // namespace in which operator resources will be deployed
restConfig *rest.Config // config for connecting to the kube API server
proxyImage string // <proxy-image-repo>:<proxy-image-tag>
// proxyPriorityClassName is the name of the PriorityClass to be set for proxy Pods. This
// is a legacy mechanism for cluster resource configuration options -
// going forward use ProxyClass.
// https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
proxyPriorityClassName string
// proxyTags are ACL tags to tag proxy auth keys. Multiple tags should
// be provided as a string with comma-separated tag values. Proxy tags
// default to tag:k8s.
// https://tailscale.com/kb/1085/auth-keys
proxyTags string
// proxyActAsDefaultLoadBalancer determines whether this operator
// instance should act as the default ingress controller when looking at
// Ingress resources with unset ingress.spec.ingressClassName.
// TODO (irbekrm): this setting does not respect the default
// IngressClass.
// https://kubernetes.io/docs/concepts/services-networking/ingress/#default-ingress-class
// We should fix that and preferably integrate with that mechanism as
// well - perhaps make the operator itself create the default
// IngressClass if this is set to true.
proxyActAsDefaultLoadBalancer bool
// proxyFirewallMode determines whether non-userspace proxies should use
// iptables or nftables for firewall configuration. Accepted values are
// iptables, nftables and auto. If set to auto, proxy will automatically
// determine which mode is supported for a given host (prefer nftables).
// Auto is usually the best choice, unless you want to explicitly set a
// specific mode for debugging purposes.
proxyFirewallMode string
}
// enqueueAllIngressEgressProxySvcsInNS returns a reconcile request for each
// ingress/egress proxy headless Service found in the provided namespace.
func enqueueAllIngressEgressProxySvcsInNS(ns string, cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, _ client.Object) []reconcile.Request {
reqs := make([]reconcile.Request, 0)
// Get all headless Services for proxies configured using Service.
svcProxyLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "svc",
}
svcHeadlessSvcList := &corev1.ServiceList{}
if err := cl.List(ctx, svcHeadlessSvcList, client.InNamespace(ns), client.MatchingLabels(svcProxyLabels)); err != nil {
logger.Errorf("error listing headless Services for tailscale ingress/egress Services in operator namespace: %v", err)
return nil
}
for _, svc := range svcHeadlessSvcList.Items {
reqs = append(reqs, reconcile.Request{NamespacedName: types.NamespacedName{Namespace: svc.Namespace, Name: svc.Name}})
}
// Get all headless Services for proxies configured using Ingress.
ingProxyLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "ingress",
}
ingHeadlessSvcList := &corev1.ServiceList{}
if err := cl.List(ctx, ingHeadlessSvcList, client.InNamespace(ns), client.MatchingLabels(ingProxyLabels)); err != nil {
logger.Errorf("error listing headless Services for tailscale Ingresses in operator namespace: %v", err)
return nil
}
for _, svc := range ingHeadlessSvcList.Items {
reqs = append(reqs, reconcile.Request{NamespacedName: types.NamespacedName{Namespace: svc.Namespace, Name: svc.Name}})
}
return reqs
}
}
// dnsRecordsReconcilerEndpointSliceHandler filters EndpointSlice events for which
// dns-records-reconciler should reconcile a headless Service. The only events
// it should reconcile are those for EndpointSlices associated with proxy
// headless Services.
func dnsRecordsReconcilerEndpointSliceHandler(ctx context.Context, o client.Object) []reconcile.Request {
if !isManagedByType(o, "svc") && !isManagedByType(o, "ingress") {
return nil
}
headlessSvcName, ok := o.GetLabels()[discoveryv1.LabelServiceName] // https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#ownership
if !ok {
return nil
}
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: o.GetNamespace(), Name: headlessSvcName}}}
}
// dnsRecordsReconcilerServiceHandler filters Service events for which
// dns-records-reconciler should reconcile. If the event is for a cluster
// ingress/cluster egress proxy's headless Service, returns the Service for
// reconcile.
func dnsRecordsReconcilerServiceHandler(ctx context.Context, o client.Object) []reconcile.Request {
if isManagedByType(o, "svc") || isManagedByType(o, "ingress") {
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: o.GetNamespace(), Name: o.GetName()}}}
}
return nil
}
// dnsRecordsReconcilerIngressHandler filters Ingress events to ensure that
// dns-records-reconciler only reconciles on tailscale Ingress events. When an
// event is observed on a tailscale Ingress, reconcile the proxy headless Service.
func dnsRecordsReconcilerIngressHandler(ns string, isDefaultLoadBalancer bool, cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
ing, ok := o.(*networkingv1.Ingress)
if !ok {
return nil
}
if !isDefaultLoadBalancer && (ing.Spec.IngressClassName == nil || *ing.Spec.IngressClassName != "tailscale") {
return nil
}
proxyResourceLabels := childResourceLabels(ing.Name, ing.Namespace, "ingress")
headlessSvc, err := getSingleObject[corev1.Service](ctx, cl, ns, proxyResourceLabels)
if err != nil {
logger.Errorf("error getting headless Service from parent labels: %v", err)
return nil
}
if headlessSvc == nil {
return nil
}
return []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: headlessSvc.Namespace, Name: headlessSvc.Name}}}
}
}
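// Illustrative note, not part of this change: an Ingress is picked up by the
// handler above either because the operator acts as the default load
// balancer, or because the Ingress explicitly opts in to the tailscale
// ingress class, roughly (field values are examples only):
//
//	ing := &networkingv1.Ingress{
//		Spec: networkingv1.IngressSpec{IngressClassName: ptr.To("tailscale")},
//	}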
type tsClient interface {
CreateKey(ctx context.Context, caps tailscale.KeyCapabilities) (string, *tailscale.Key, error)
DeleteDevice(ctx context.Context, nodeStableID string) error

@ -20,7 +20,9 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/net/dns/resolvconffile"
"tailscale.com/types/ptr"
"tailscale.com/util/dnsname"
"tailscale.com/util/mak"
)
@ -71,12 +73,11 @@ func TestLoadBalancerClass(t *testing.T) {
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Normally the Tailscale proxy pod would come up here and write its info
// into the secret. Simulate that, then verify reconcile again and verify
@ -119,7 +120,7 @@ func TestLoadBalancerClass(t *testing.T) {
},
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Turn the service back into a ClusterIP service, which should make the
// operator clean up.
@ -158,7 +159,7 @@ func TestLoadBalancerClass(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestTailnetTargetFQDNAnnotation(t *testing.T) {
@ -213,12 +214,11 @@ func TestTailnetTargetFQDNAnnotation(t *testing.T) {
parentType: "svc",
tailnetTargetFQDN: tailnetTargetFQDN,
hostname: "default-test",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want := &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
@ -239,10 +239,10 @@ func TestTailnetTargetFQDNAnnotation(t *testing.T) {
Selector: nil,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, want, nil)
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// Change the tailscale-target-fqdn annotation which should update the
// StatefulSet
@ -324,12 +324,11 @@ func TestTailnetTargetIPAnnotation(t *testing.T) {
parentType: "svc",
tailnetTargetIP: tailnetTargetIP,
hostname: "default-test",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want := &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
@ -350,10 +349,10 @@ func TestTailnetTargetIPAnnotation(t *testing.T) {
Selector: nil,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, want, nil)
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// Change the tailscale-target-ip annotation which should update the
// StatefulSet
@ -432,12 +431,11 @@ func TestAnnotations(t *testing.T) {
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want := &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
@ -457,7 +455,7 @@ func TestAnnotations(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Turn the service back into a ClusterIP service, which should make the
// operator clean up.
@ -489,7 +487,7 @@ func TestAnnotations(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestAnnotationIntoLB(t *testing.T) {
@ -541,12 +539,11 @@ func TestAnnotationIntoLB(t *testing.T) {
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// Normally the Tailscale proxy pod would come up here and write its info
// into the secret. Simulate that, since it would have normally happened at
@ -579,7 +576,7 @@ func TestAnnotationIntoLB(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Remove Tailscale's annotation, and at the same time convert the service
// into a tailscale LoadBalancer.
@ -590,10 +587,8 @@ func TestAnnotationIntoLB(t *testing.T) {
})
expectReconciled(t, sr, "default", "test")
// None of the proxy machinery should have changed...
// (although configfile hash will change in test env only because we lose auth key due to out test not syncing secret.StringData -> secret.Data)
o.confFileHash = "fb9006e30ecda75e88c29dcd0ca2dd28a2ae964d001c66e1be3efe159cc3821d"
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// ... but the service should have a LoadBalancer status.
want = &corev1.Service{
@ -625,7 +620,7 @@ func TestAnnotationIntoLB(t *testing.T) {
},
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestLBIntoAnnotation(t *testing.T) {
@ -675,12 +670,11 @@ func TestLBIntoAnnotation(t *testing.T) {
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
// Normally the Tailscale proxy pod would come up here and write its info
// into the secret. Simulate that, then verify reconcile again and verify
@ -723,7 +717,7 @@ func TestLBIntoAnnotation(t *testing.T) {
},
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Turn the service back into a ClusterIP service, but also add the
// tailscale annotation.
@ -742,12 +736,8 @@ func TestLBIntoAnnotation(t *testing.T) {
})
expectReconciled(t, sr, "default", "test")
// configfile hash changes on a re-apply in this case in tests only as
// we lose the auth key due to the test apply not syncing
// secret.StringData -> Data.
o.confFileHash = "fb9006e30ecda75e88c29dcd0ca2dd28a2ae964d001c66e1be3efe159cc3821d"
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want = &corev1.Service{
TypeMeta: metav1.TypeMeta{
@ -768,7 +758,7 @@ func TestLBIntoAnnotation(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestCustomHostname(t *testing.T) {
@ -821,12 +811,11 @@ func TestCustomHostname(t *testing.T) {
parentType: "svc",
hostname: "reindeer-flotilla",
clusterTargetIP: "10.20.30.40",
confFileHash: "42376226c7d76ed6d6318315dc6c402f7d993bc0b01a5b0e6c8a833106b7509e",
}
expectEqual(t, fc, expectedSecret(t, o))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSecret(t, o), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
want := &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
@ -847,7 +836,7 @@ func TestCustomHostname(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
// Turn the service back into a ClusterIP service, which should make the
// operator clean up.
@ -882,7 +871,7 @@ func TestCustomHostname(t *testing.T) {
Type: corev1.ServiceTypeClusterIP,
},
}
expectEqual(t, fc, want)
expectEqual(t, fc, want, nil)
}
func TestCustomPriorityClassName(t *testing.T) {
@ -937,10 +926,9 @@ func TestCustomPriorityClassName(t *testing.T) {
hostname: "tailscale-critical",
priorityClassName: "custom-priority-class-name",
clusterTargetIP: "10.20.30.40",
confFileHash: "13cdef0d5f6f0f2406af028710ea1e0f99f65aba4021e4e70ac75a73cf141fd1",
}
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
}
func TestProxyClassForService(t *testing.T) {
@ -1000,11 +988,10 @@ func TestProxyClassForService(t *testing.T) {
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSecret(t, opts))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 2. The Service gets updated with tailscale.com/proxy-class label
// pointing at the 'custom-metadata' ProxyClass. The ProxyClass is not
@ -1013,7 +1000,7 @@ func TestProxyClassForService(t *testing.T) {
mak.Set(&svc.Labels, LabelProxyClass, "custom-metadata")
})
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 3. ProxyClass is set to Ready, the Service gets reconciled by the
// services-reconciler and the customization from the ProxyClass is
@ -1027,12 +1014,8 @@ func TestProxyClassForService(t *testing.T) {
}}}
})
opts.proxyClass = pc.Name
// configfile hash changes on a second apply in test env only because we
// lose auth key due to out test not syncing secret.StringData ->
// secret.Data
opts.confFileHash = "fb9006e30ecda75e88c29dcd0ca2dd28a2ae964d001c66e1be3efe159cc3821d"
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 4. tailscale.com/proxy-class label is removed from the Service, the
// configuration from the ProxyClass is removed from the cluster
@ -1042,7 +1025,7 @@ func TestProxyClassForService(t *testing.T) {
})
opts.proxyClass = ""
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts))
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
}
func TestDefaultLoadBalancer(t *testing.T) {
@ -1086,7 +1069,7 @@ func TestDefaultLoadBalancer(t *testing.T) {
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"))
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
o := configOpts{
stsName: shortName,
secretName: fullName,
@ -1094,9 +1077,9 @@ func TestDefaultLoadBalancer(t *testing.T) {
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
}
func TestProxyFirewallMode(t *testing.T) {
@ -1148,12 +1131,71 @@ func TestProxyFirewallMode(t *testing.T) {
hostname: "default-test",
firewallMode: "nftables",
clusterTargetIP: "10.20.30.40",
confFileHash: "6cceb342cd3e1c56cd1bd94c29df63df3653c35fe98a7e7afcdee0dcaa2ad549",
}
expectEqual(t, fc, expectedSTS(t, fc, o))
expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)
}
func TestTailscaledConfigfileHash(t *testing.T) {
fc := fake.NewFakeClient()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
isDefaultLoadBalancer: true,
}
// Create a service that we should manage, and check that the initial round
// of objects looks right.
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
},
})
expectReconciled(t, sr, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
o := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "e09bededa0379920141cbd0b0dbdf9b8b66545877f9e8397423f5ce3e1ba439e",
}
expectEqual(t, fc, expectedSTS(t, fc, o), nil)
// 2. Hostname gets changed, configfile is updated and a new hash value
// is produced.
mustUpdate(t, fc, "default", "test", func(svc *corev1.Service) {
mak.Set(&svc.Annotations, AnnotationHostname, "another-test")
})
o.hostname = "another-test"
o.confFileHash = "5d754cf55463135ee34aa9821f2fd8483b53eb0570c3740c84a086304f427684"
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, o), nil)
}
func Test_isMagicDNSName(t *testing.T) {
tests := []struct {
in string
@ -1311,3 +1353,148 @@ func Test_serviceHandlerForIngress(t *testing.T) {
t.Errorf("unexpected reconcile request for a Service that does not belong to any Ingress: %#+v\n", gotReqs)
}
}
func Test_clusterDomainFromResolverConf(t *testing.T) {
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
tests := []struct {
name string
conf *resolvconffile.Config
namespace string
want string
}{
{
name: "success- custom domain",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.svc.department.org.io"), toFQDN(t, "svc.department.org.io"), toFQDN(t, "department.org.io")},
},
namespace: "foo",
want: "department.org.io",
},
{
name: "success- default domain",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.svc.cluster.local."), toFQDN(t, "svc.cluster.local."), toFQDN(t, "cluster.local.")},
},
namespace: "foo",
want: "cluster.local",
},
{
name: "only two search domains found",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "svc.department.org.io"), toFQDN(t, "department.org.io")},
},
namespace: "foo",
want: "cluster.local",
},
{
name: "first search domain does not match the expected structure",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.bar.department.org.io"), toFQDN(t, "svc.department.org.io"), toFQDN(t, "some.other.fqdn")},
},
namespace: "foo",
want: "cluster.local",
},
{
name: "second search domain does not match the expected structure",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.svc.department.org.io"), toFQDN(t, "foo.department.org.io"), toFQDN(t, "some.other.fqdn")},
},
namespace: "foo",
want: "cluster.local",
},
{
name: "third search domain does not match the expected structure",
conf: &resolvconffile.Config{
SearchDomains: []dnsname.FQDN{toFQDN(t, "foo.svc.department.org.io"), toFQDN(t, "svc.department.org.io"), toFQDN(t, "some.other.fqdn")},
},
namespace: "foo",
want: "cluster.local",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := clusterDomainFromResolverConf(tt.conf, tt.namespace, zl.Sugar()); got != tt.want {
t.Errorf("clusterDomainFromResolverConf() = %v, want %v", got, tt.want)
}
})
}
}
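// Illustrative note, not part of this change: the cases above assume the
// conventional in-cluster resolv.conf layout. For namespace "foo" and cluster
// domain "department.org.io" that would be, roughly:
//
//	search foo.svc.department.org.io svc.department.org.io department.org.io
//
// clusterDomainFromResolverConf expects the first search domain to look like
// <namespace>.svc.<cluster-domain> and falls back to "cluster.local" whenever
// the search domains do not match that structure.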
func Test_externalNameService(t *testing.T) {
fc := fake.NewFakeClient()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
// 1. An ExternalName Service that should be exposed via Tailscale gets
// created.
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// 1. Create an ExternalName Service that we should manage, and check that the initial round
// of objects looks right.
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
Annotations: map[string]string{
AnnotationExpose: "true",
},
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeExternalName,
ExternalName: "foo.com",
},
})
expectReconciled(t, sr, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "svc",
hostname: "default-test",
clusterTargetDNS: "foo.com",
}
expectEqual(t, fc, expectedSecret(t, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 2. Change the ExternalName and verify that changes get propagated.
mustUpdate(t, sr, "default", "test", func(s *corev1.Service) {
s.Spec.ExternalName = "bar.com"
})
expectReconciled(t, sr, "default", "test")
opts.clusterTargetDNS = "bar.com"
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
}
func toFQDN(t *testing.T, s string) dnsname.FQDN {
t.Helper()
fqdn, err := dnsname.ToFQDN(s)
if err != nil {
t.Fatalf("error coverting %q to dnsname.FQDN: %v", s, err)
}
return fqdn
}

@ -3,13 +3,12 @@
//go:build !plan9
// tailscale-operator provides a way to expose services running in a Kubernetes
// cluster to your Tailnet.
package main
import (
"context"
"fmt"
"strings"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
@ -30,7 +29,9 @@ import (
const (
reasonProxyClassInvalid = "ProxyClassInvalid"
reasonProxyClassValid = "ProxyClassValid"
reasonCustomTSEnvVar = "CustomTSEnvVar"
messageProxyClassInvalid = "ProxyClass is not valid: %v"
messageCustomTSEnvVar = "ProxyClass overrides the default value for %s env var for %s container. Running with custom values for Tailscale env vars is not recommended and might break in the future."
)
type ProxyClassReconciler struct {
@ -98,6 +99,19 @@ func (a *ProxyClassReconciler) validate(pc *tsapi.ProxyClass) (violations field.
violations = append(violations, errs...)
}
}
if tc := pod.TailscaleContainer; tc != nil {
for _, e := range tc.Env {
if strings.HasPrefix(string(e.Name), "TS_") {
a.recorder.Event(pc, corev1.EventTypeWarning, reasonCustomTSEnvVar, fmt.Sprintf(messageCustomTSEnvVar, string(e.Name), "tailscale"))
}
if strings.EqualFold(string(e.Name), "EXPERIMENTAL_TS_CONFIGFILE_PATH") {
a.recorder.Event(pc, corev1.EventTypeWarning, reasonCustomTSEnvVar, fmt.Sprintf(messageCustomTSEnvVar, string(e.Name), "tailscale"))
}
if strings.EqualFold(string(e.Name), "EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS") {
a.recorder.Event(pc, corev1.EventTypeWarning, reasonCustomTSEnvVar, fmt.Sprintf(messageCustomTSEnvVar, string(e.Name), "tailscale"))
}
}
}
}
}
// We do not validate embedded fields (security context, resource

@ -36,8 +36,9 @@ func TestProxyClass(t *testing.T) {
Labels: map[string]string{"foo": "bar", "xyz1234": "abc567"},
Annotations: map[string]string{"foo.io/bar": "{'key': 'val1232'}"},
Pod: &tsapi.Pod{
Labels: map[string]string{"foo": "bar", "xyz1234": "abc567"},
Annotations: map[string]string{"foo.io/bar": "{'key': 'val1232'}"},
Labels: map[string]string{"foo": "bar", "xyz1234": "abc567"},
Annotations: map[string]string{"foo.io/bar": "{'key': 'val1232'}"},
TailscaleContainer: &tsapi.Container{Env: []tsapi.Env{{Name: "FOO", Value: "BAR"}}},
},
},
},
@ -51,16 +52,17 @@ func TestProxyClass(t *testing.T) {
if err != nil {
t.Fatal(err)
}
fr := record.NewFakeRecorder(3) // bump this if you expect a test case to throw more events
cl := tstest.NewClock(tstest.ClockOpts{})
pcr := &ProxyClassReconciler{
Client: fc,
logger: zl.Sugar(),
clock: cl,
recorder: record.NewFakeRecorder(1),
recorder: fr,
}
expectReconciled(t, pcr, "", "test")
// 1. A valid ProxyClass resource gets its status updated to Ready.
expectReconciled(t, pcr, "", "test")
pc.Status.Conditions = append(pc.Status.Conditions, tsapi.ConnectorCondition{
Type: tsapi.ProxyClassready,
Status: metav1.ConditionTrue,
@ -69,7 +71,7 @@ func TestProxyClass(t *testing.T) {
LastTransitionTime: &metav1.Time{Time: cl.Now().Truncate(time.Second)},
})
expectEqual(t, fc, pc)
expectEqual(t, fc, pc, nil)
// 2. An invalid ProxyClass resource gets its status updated to Invalid.
pc.Spec.StatefulSet.Labels["foo"] = "?!someVal"
@ -79,5 +81,18 @@ func TestProxyClass(t *testing.T) {
expectReconciled(t, pcr, "", "test")
msg := `ProxyClass is not valid: .spec.statefulSet.labels: Invalid value: "?!someVal": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')`
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
expectEqual(t, fc, pc)
expectEqual(t, fc, pc, nil)
expectedEvent := "Warning ProxyClassInvalid ProxyClass is not valid: .spec.statefulSet.labels: Invalid value: \"?!someVal\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')"
expectEvents(t, fr, []string{expectedEvent})
// 3. A valid ProxyClass with Tailscale env vars set results in warning events.
mustUpdate(t, fc, "", "test", func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec.StatefulSet.Labels = nil // unset invalid labels from the previous test
proxyClass.Spec.StatefulSet.Pod.TailscaleContainer.Env = []tsapi.Env{{Name: "TS_USERSPACE", Value: "true"}, {Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH"}, {Name: "EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS"}}
})
expectedEvents := []string{"Warning CustomTSEnvVar ProxyClass overrides the default value for TS_USERSPACE env var for tailscale container. Running with custom values for Tailscale env vars is not recommended and might break in the future.",
"Warning CustomTSEnvVar ProxyClass overrides the default value for EXPERIMENTAL_TS_CONFIGFILE_PATH env var for tailscale container. Running with custom values for Tailscale env vars is not recommended and might break in the future.",
"Warning CustomTSEnvVar ProxyClass overrides the default value for EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS env var for tailscale container. Running with custom values for Tailscale env vars is not recommended and might break in the future."}
expectReconciled(t, pcr, "", "test")
expectEvents(t, fr, expectedEvents)
}

@ -29,11 +29,13 @@ import (
"sigs.k8s.io/yaml"
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
kubeutils "tailscale.com/k8s-operator"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/net/netutil"
"tailscale.com/tailcfg"
"tailscale.com/types/opt"
"tailscale.com/types/ptr"
"tailscale.com/util/dnsname"
"tailscale.com/util/mak"
)
@ -86,14 +88,11 @@ const (
// ensure that it does not get removed when a ProxyClass configuration
// is applied.
podAnnotationLastSetClusterIP = "tailscale.com/operator-last-set-cluster-ip"
podAnnotationLastSetClusterDNSName = "tailscale.com/operator-last-set-cluster-dns-name"
podAnnotationLastSetTailnetTargetIP = "tailscale.com/operator-last-set-ts-tailnet-target-ip"
podAnnotationLastSetTailnetTargetFQDN = "tailscale.com/operator-last-set-ts-tailnet-target-fqdn"
// podAnnotationLastSetConfigFileHash is sha256 hash of the current tailscaled configuration contents.
podAnnotationLastSetConfigFileHash = "tailscale.com/operator-last-set-config-file-hash"
// tailscaledConfigKey is the name of the key in proxy Secret Data that
// holds the tailscaled config contents.
tailscaledConfigKey = "tailscaled"
)
var (
@ -108,8 +107,9 @@ type tailscaleSTSConfig struct {
ParentResourceUID string
ChildResourceLabels map[string]string
ServeConfig *ipn.ServeConfig // if serve config is set, this is a proxy for Ingress
ClusterTargetIP string // ingress target
ServeConfig *ipn.ServeConfig // if serve config is set, this is a proxy for Ingress
ClusterTargetIP string // ingress target IP
ClusterTargetDNSName string // ingress target DNS name
// If set to true, operator should configure containerboot to forward
// cluster traffic via the proxy set up for Kubernetes Ingress.
ForwardClusterTrafficViaL7IngressProxy bool
@ -171,11 +171,11 @@ func (a *tailscaleSTSReconciler) Provision(ctx context.Context, logger *zap.Suga
return nil, fmt.Errorf("failed to reconcile headless service: %w", err)
}
secretName, tsConfigHash, err := a.createOrGetSecret(ctx, logger, sts, hsvc)
secretName, tsConfigHash, configs, err := a.createOrGetSecret(ctx, logger, sts, hsvc)
if err != nil {
return nil, fmt.Errorf("failed to create or get API key secret: %w", err)
}
_, err = a.reconcileSTS(ctx, logger, sts, hsvc, secretName, tsConfigHash)
_, err = a.reconcileSTS(ctx, logger, sts, hsvc, secretName, tsConfigHash, configs)
if err != nil {
return nil, fmt.Errorf("failed to reconcile statefulset: %w", err)
}
@ -288,7 +288,7 @@ func (a *tailscaleSTSReconciler) reconcileHeadlessService(ctx context.Context, l
return createOrUpdate(ctx, a.Client, a.operatorNamespace, hsvc, func(svc *corev1.Service) { svc.Spec = hsvc.Spec })
}
func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *zap.SugaredLogger, stsC *tailscaleSTSConfig, hsvc *corev1.Service) (string, string, error) {
func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *zap.SugaredLogger, stsC *tailscaleSTSConfig, hsvc *corev1.Service) (secretName, hash string, configs tailscaleConfigs, _ error) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
// Hardcode a -0 suffix so that in future, if we support
@ -304,25 +304,23 @@ func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *
logger.Debugf("secret %s/%s already exists", secret.GetNamespace(), secret.GetName())
orig = secret.DeepCopy()
} else if !apierrors.IsNotFound(err) {
return "", "", err
return "", "", nil, err
}
var (
authKey, hash string
)
var authKey string
if orig == nil {
// Initially it contains only tailscaled config, but when the
// proxy starts, it will also store there the state, certs and
// ACME account key.
sts, err := getSingleObject[appsv1.StatefulSet](ctx, a.Client, a.operatorNamespace, stsC.ChildResourceLabels)
if err != nil {
return "", "", err
return "", "", nil, err
}
if sts != nil {
// StatefulSet exists, so we have already created the secret.
// If the secret is missing, they should delete the StatefulSet.
logger.Errorf("Tailscale proxy secret doesn't exist, but the corresponding StatefulSet %s/%s already does. Something is wrong, please delete the StatefulSet.", sts.GetNamespace(), sts.GetName())
return "", "", nil
return "", "", nil, nil
}
// Create API Key secret which is going to be used by the statefulset
// to authenticate with Tailscale.
@ -333,36 +331,66 @@ func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *
}
authKey, err = a.newAuthKey(ctx, tags)
if err != nil {
return "", "", err
return "", "", nil, err
}
}
confFileBytes, h, err := tailscaledConfig(stsC, authKey, orig)
configs, err := tailscaledConfig(stsC, authKey, orig)
if err != nil {
return "", "", nil, fmt.Errorf("error creating tailscaled config: %w", err)
}
hash, err = tailscaledConfigHash(configs)
if err != nil {
return "", "", fmt.Errorf("error creating tailscaled config: %w", err)
return "", "", nil, fmt.Errorf("error calculating hash of tailscaled configs: %w", err)
}
latest := tailcfg.CapabilityVersion(-1)
var latestConfig ipn.ConfigVAlpha
for key, val := range configs {
fn := kubeutils.TailscaledConfigFileNameForCap(key)
b, err := json.Marshal(val)
if err != nil {
return "", "", nil, fmt.Errorf("error marshalling tailscaled config: %w", err)
}
mak.Set(&secret.StringData, fn, string(b))
if key > latest {
latest = key
latestConfig = val
}
}
hash = h
mak.Set(&secret.StringData, tailscaledConfigKey, string(confFileBytes))
if stsC.ServeConfig != nil {
j, err := json.Marshal(stsC.ServeConfig)
if err != nil {
return "", "", err
return "", "", nil, err
}
mak.Set(&secret.StringData, "serve-config", string(j))
}
if orig != nil {
logger.Debugf("patching existing state Secret with values %s", secret.Data[tailscaledConfigKey])
logger.Debugf("patching the existing proxy Secret with tailscaled config %s", sanitizeConfigBytes(latestConfig))
if err := a.Patch(ctx, secret, client.MergeFrom(orig)); err != nil {
return "", "", err
return "", "", nil, err
}
} else {
logger.Debugf("creating new state Secret with authkey %s", secret.Data[tailscaledConfigKey])
logger.Debugf("creating a new Secret for the proxy with tailscaled config %s", sanitizeConfigBytes(latestConfig))
if err := a.Create(ctx, secret); err != nil {
return "", "", err
return "", "", nil, err
}
}
return secret.Name, hash, nil
return secret.Name, hash, configs, nil
}
// sanitizeConfigBytes returns the ipn.ConfigVAlpha in string form with the
// auth key redacted.
func sanitizeConfigBytes(c ipn.ConfigVAlpha) string {
if c.AuthKey != nil {
c.AuthKey = ptr.To("**redacted**")
}
sanitizedBytes, err := json.Marshal(c)
if err != nil {
return "invalid config"
}
return string(sanitizedBytes)
}
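// Illustrative usage sketch, not part of this change: sanitizeConfigBytes is
// what keeps auth keys out of the debug logs above. The values below are
// examples only.
//
//	c := ipn.ConfigVAlpha{Hostname: ptr.To("example-proxy"), AuthKey: ptr.To("tskey-auth-example")}
//	logger.Debugf("tailscaled config: %s", sanitizeConfigBytes(c))
//	// The marshalled JSON shows "AuthKey":"**redacted**" instead of the real key.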
// DeviceInfo returns the device ID and hostname for the Tailscale device
@ -417,7 +445,7 @@ var proxyYaml []byte
//go:embed deploy/manifests/userspace-proxy.yaml
var userspaceProxyYaml []byte
func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig, headlessSvc *corev1.Service, proxySecret, tsConfigHash string) (*appsv1.StatefulSet, error) {
func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig, headlessSvc *corev1.Service, proxySecret, tsConfigHash string, configs map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha) (*appsv1.StatefulSet, error) {
ss := new(appsv1.StatefulSet)
if sts.ServeConfig != nil && sts.ForwardClusterTrafficViaL7IngressProxy != true { // If forwarding cluster traffic via is required we need non-userspace + NET_ADMIN + forwarding
if err := yaml.Unmarshal(userspaceProxyYaml, &ss); err != nil {
@ -473,9 +501,15 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
Value: proxySecret,
},
corev1.EnvVar{
// Old tailscaled config key is still used for backwards compatibility.
Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH",
Value: "/etc/tsconfig/tailscaled",
},
corev1.EnvVar{
// New style is in the form of cap-<capability-version>.hujson.
Name: "TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR",
Value: "/etc/tsconfig",
},
)
if sts.ForwardClusterTrafficViaL7IngressProxy {
container.Env = append(container.Env, corev1.EnvVar{
@ -485,18 +519,16 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
}
// Configure containerboot to run tailscaled with a configfile read from the state Secret.
mak.Set(&ss.Spec.Template.Annotations, podAnnotationLastSetConfigFileHash, tsConfigHash)
pod.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
configVolume := corev1.Volume{
Name: "tailscaledconfig",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: proxySecret,
Items: []corev1.KeyToPath{{
Key: tailscaledConfigKey,
Path: tailscaledConfigKey,
}},
},
},
})
}
pod.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, configVolume)
container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{
Name: "tailscaledconfig",
ReadOnly: true,
@ -518,6 +550,12 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
Value: sts.ClusterTargetIP,
})
mak.Set(&ss.Spec.Template.Annotations, podAnnotationLastSetClusterIP, sts.ClusterTargetIP)
} else if sts.ClusterTargetDNSName != "" {
container.Env = append(container.Env, corev1.EnvVar{
Name: "TS_EXPERIMENTAL_DEST_DNS_NAME",
Value: sts.ClusterTargetDNSName,
})
mak.Set(&ss.Spec.Template.Annotations, podAnnotationLastSetClusterDNSName, sts.ClusterTargetDNSName)
} else if sts.TailnetTargetIP != "" {
container.Env = append(container.Env, corev1.EnvVar{
Name: "TS_TAILNET_TARGET_IP",
@ -545,10 +583,7 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: proxySecret,
Items: []corev1.KeyToPath{{
Key: "serve-config",
Path: "serve-config",
}},
Items: []corev1.KeyToPath{{Key: "serve-config", Path: "serve-config"}},
},
},
})
@ -556,7 +591,7 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
logger.Debugf("reconciling statefulset %s/%s", ss.GetNamespace(), ss.GetName())
if sts.ProxyClass != "" {
logger.Debugf("configuring proxy resources with ProxyClass %s", sts.ProxyClass)
ss = applyProxyClassToStatefulSet(proxyClass, ss)
ss = applyProxyClassToStatefulSet(proxyClass, ss, sts, logger)
}
updateSS := func(s *appsv1.StatefulSet) {
s.Spec = ss.Spec
@ -587,8 +622,28 @@ func mergeStatefulSetLabelsOrAnnots(current, custom map[string]string, managed [
return custom
}
func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet) *appsv1.StatefulSet {
if pc == nil || ss == nil || pc.Spec.StatefulSet == nil {
func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet, stsCfg *tailscaleSTSConfig, logger *zap.SugaredLogger) *appsv1.StatefulSet {
if pc == nil || ss == nil {
return ss
}
if pc.Spec.Metrics != nil && pc.Spec.Metrics.Enable {
if stsCfg.TailnetTargetFQDN == "" && stsCfg.TailnetTargetIP == "" && !stsCfg.ForwardClusterTrafficViaL7IngressProxy {
enableMetrics(ss, pc)
} else if stsCfg.ForwardClusterTrafficViaL7IngressProxy {
// TODO (irbekrm): fix this
// For Ingress proxies that have been configured with
// tailscale.com/experimental-forward-cluster-traffic-via-ingress
// annotation, all cluster traffic is forwarded to the
// Ingress backend(s).
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
} else {
// TODO (irbekrm): fix this
// For egress proxies, currently all cluster traffic is forwarded to the tailnet target.
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
}
}
if pc.Spec.StatefulSet == nil {
return ss
}
@ -615,6 +670,7 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet)
ss.Spec.Template.Spec.ImagePullSecrets = wantsPod.ImagePullSecrets
ss.Spec.Template.Spec.NodeName = wantsPod.NodeName
ss.Spec.Template.Spec.NodeSelector = wantsPod.NodeSelector
ss.Spec.Template.Spec.Affinity = wantsPod.Affinity
ss.Spec.Template.Spec.Tolerations = wantsPod.Tolerations
// Update containers.
@ -626,6 +682,15 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet)
base.SecurityContext = overlay.SecurityContext
}
base.Resources = overlay.Resources
for _, e := range overlay.Env {
// Env vars configured via ProxyClass might override env
// vars that have been specified by the operator, e.g.
// TS_USERSPACE. The intended behaviour is to allow this,
// and in practice it works without explicitly removing
// the operator-configured value here, as a later value
// in the env var list overrides an earlier one.
base.Env = append(base.Env, corev1.EnvVar{Name: string(e.Name), Value: e.Value})
}
return base
}
for i, c := range ss.Spec.Template.Spec.Containers {
@ -645,41 +710,97 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet)
return ss
}
func enableMetrics(ss *appsv1.StatefulSet, pc *tsapi.ProxyClass) {
for i, c := range ss.Spec.Template.Spec.Containers {
if c.Name == "tailscale" {
// Serve metrics on <pod-ip>:9001/debug/metrics. If
// we didn't specify the Pod IP here, the proxy would, in
// some cases, also listen on its Tailscale IP; we don't
// want folks to start relying on this side effect as a
// feature.
ss.Spec.Template.Spec.Containers[i].Env = append(ss.Spec.Template.Spec.Containers[i].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
ss.Spec.Template.Spec.Containers[i].Ports = append(ss.Spec.Template.Spec.Containers[i].Ports, corev1.ContainerPort{Name: "metrics", Protocol: "TCP", HostPort: 9001, ContainerPort: 9001})
break
}
}
}
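// Illustrative sketch (assumption, not part of the diff): with metrics enabled
// as above, something inside the cluster could scrape the proxy roughly like
// this (podIP stands for the proxy Pod's IP; net/http and io are assumed):
//	resp, err := http.Get("http://" + podIP + ":9001/debug/metrics")
//	if err != nil {
//		return err
//	}
//	defer resp.Body.Close()
//	metrics, err := io.ReadAll(resp.Body)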
func readAuthKey(secret *corev1.Secret, key string) (*string, error) {
origConf := &ipn.ConfigVAlpha{}
if err := json.Unmarshal([]byte(secret.Data[key]), origConf); err != nil {
return nil, fmt.Errorf("error unmarshaling previous tailscaled config in %q: %w", key, err)
}
return origConf.AuthKey, nil
}
// tailscaledConfig takes a proxy config, a newly generated auth key if
// generated and a Secret with the previous proxy state and auth key and
// produces returns tailscaled configuration and a hash of that configuration.
func tailscaledConfig(stsC *tailscaleSTSConfig, newAuthkey string, oldSecret *corev1.Secret) ([]byte, string, error) {
conf := ipn.ConfigVAlpha{
Version: "alpha0",
AcceptDNS: "false",
Locked: "false",
Hostname: &stsC.Hostname,
// returns tailscaled configuration and a hash of that configuration.
//
// As of 2024-05-09 it also returns a legacy tailscaled config without the
// later-added NoStatefulFiltering field to support proxies older than cap95.
// TODO (irbekrm): remove the legacy config once we no longer need to support
// versions older than cap94,
// https://tailscale.com/kb/1236/kubernetes-operator#operator-and-proxies
func tailscaledConfig(stsC *tailscaleSTSConfig, newAuthkey string, oldSecret *corev1.Secret) (tailscaleConfigs, error) {
conf := &ipn.ConfigVAlpha{
Version: "alpha0",
AcceptDNS: "false",
AcceptRoutes: "false", // AcceptRoutes defaults to true
Locked: "false",
Hostname: &stsC.Hostname,
NoStatefulFiltering: "false",
}
// For egress proxies only, we need to ensure that stateful filtering is
// not in place so that traffic from cluster can be forwarded via
// Tailscale IPs.
if stsC.TailnetTargetFQDN != "" || stsC.TailnetTargetIP != "" {
conf.NoStatefulFiltering = "true"
}
if stsC.Connector != nil {
routes, err := netutil.CalcAdvertiseRoutes(stsC.Connector.routes, stsC.Connector.isExitNode)
if err != nil {
return nil, "", fmt.Errorf("error calculating routes: %w", err)
return nil, fmt.Errorf("error calculating routes: %w", err)
}
conf.AdvertiseRoutes = routes
}
if newAuthkey != "" {
conf.AuthKey = &newAuthkey
} else if oldSecret != nil && len(oldSecret.Data[tailscaledConfigKey]) > 0 { // write to StringData, read from Data as StringData is write-only
origConf := &ipn.ConfigVAlpha{}
if err := json.Unmarshal([]byte(oldSecret.Data[tailscaledConfigKey]), origConf); err != nil {
return nil, "", fmt.Errorf("error unmarshaling previous tailscaled config: %w", err)
} else if oldSecret != nil {
var err error
latest := tailcfg.CapabilityVersion(-1)
latestStr := ""
for k, data := range oldSecret.Data {
// write to StringData, read from Data as StringData is write-only
if len(data) == 0 {
continue
}
v, err := kubeutils.CapVerFromFileName(k)
if err != nil {
continue
}
if v > latest {
latestStr = k
latest = v
}
}
// Allow for configs that don't contain an auth key. Perhaps
// users have some mechanisms to delete them. Auth key is
// normally not needed after the initial login.
if latestStr != "" {
conf.AuthKey, err = readAuthKey(oldSecret, latestStr)
if err != nil {
return nil, err
}
}
conf.AuthKey = origConf.AuthKey
}
confFileBytes, err := json.Marshal(conf)
if err != nil {
return nil, "", fmt.Errorf("error marshaling tailscaled config : %w", err)
}
hash, err := hashBytes(confFileBytes)
if err != nil {
return nil, "", fmt.Errorf("error calculating config hash: %w", err)
}
return confFileBytes, hash, nil
capVerConfigs := make(map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha)
capVerConfigs[95] = *conf
// legacy config should not contain NoStatefulFiltering field.
conf.NoStatefulFiltering.Clear()
capVerConfigs[94] = *conf
return capVerConfigs, nil
}
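// Illustrative sketch (assumption, not part of the diff): the returned
// tailscaleConfigs map is keyed by capability version and is expected to be
// written to the proxy's state Secret as one file per version (cap-94.hujson,
// cap-95.hujson), so a proxy can pick the newest config it understands.
// Here secret stands for that Secret:
//	for capVer, cfg := range capVerConfigs {
//		b, err := json.Marshal(cfg)
//		if err != nil {
//			return fmt.Errorf("error marshaling tailscaled config: %w", err)
//		}
//		mak.Set(&secret.StringData, fmt.Sprintf("cap-%d.hujson", capVer), string(b))
//	}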
// ptrObject is a type constraint for pointer types that implement
@ -689,7 +810,9 @@ type ptrObject[T any] interface {
*T
}
// hashBytes produces a hash for the provided bytes that is the same across
type tailscaleConfigs map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha
// hashBytes produces a hash for the provided tailscaled config that is the same across
// different invocations of this code. We do not use the
// tailscale.com/deephash.Hash here because that produces a different hash for
// the same value in different tailscale builds. The hash we are producing here
@ -698,10 +821,13 @@ type ptrObject[T any] interface {
// thing that changed is operator version (the hash is also exposed to users via
// an annotation and might be confusing if it changes without the config having
// changed).
func hashBytes(b []byte) (string, error) {
h := sha256.New()
_, err := h.Write(b)
func tailscaledConfigHash(c tailscaleConfigs) (string, error) {
b, err := json.Marshal(c)
if err != nil {
return "", fmt.Errorf("error marshalling tailscaled configs: %w", err)
}
h := sha256.New()
if _, err = h.Write(b); err != nil {
return "", fmt.Errorf("error calculating hash: %w", err)
}
return fmt.Sprintf("%x", h.Sum(nil)), nil

@ -14,6 +14,7 @@ import (
"testing"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
@ -51,6 +52,10 @@ func Test_statefulSetNameBase(t *testing.T) {
}
func Test_applyProxyClassToStatefulSet(t *testing.T) {
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
// Setup
proxyClassAllOpts := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
@ -66,6 +71,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
ImagePullSecrets: []corev1.LocalObjectReference{{Name: "docker-creds"}},
NodeName: "some-node",
NodeSelector: map[string]string{"beta.kubernetes.io/os": "linux"},
Affinity: &corev1.Affinity{NodeAffinity: &corev1.NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{}}},
Tolerations: []corev1.Toleration{{Key: "", Operator: "Exists"}},
TailscaleContainer: &tsapi.Container{
SecurityContext: &corev1.SecurityContext{
@ -75,6 +81,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1000m"), corev1.ResourceMemory: resource.MustParse("128Mi")},
Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m"), corev1.ResourceMemory: resource.MustParse("64Mi")},
},
Env: []tsapi.Env{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}},
},
TailscaleInitContainer: &tsapi.Container{
SecurityContext: &corev1.SecurityContext{
@ -85,6 +92,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1000m"), corev1.ResourceMemory: resource.MustParse("128Mi")},
Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m"), corev1.ResourceMemory: resource.MustParse("64Mi")},
},
Env: []tsapi.Env{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}},
},
},
},
@ -102,6 +110,12 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
},
},
}
proxyClassMetrics := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
Metrics: &tsapi.Metrics{Enable: true},
},
}
var userspaceProxySS, nonUserspaceProxySS appsv1.StatefulSet
if err := yaml.Unmarshal(userspaceProxyYaml, &userspaceProxySS); err != nil {
t.Fatalf("unmarshaling userspace proxy template: %v", err)
@ -137,13 +151,16 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.ImagePullSecrets = proxyClassAllOpts.Spec.StatefulSet.Pod.ImagePullSecrets
wantSS.Spec.Template.Spec.NodeName = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeName
wantSS.Spec.Template.Spec.NodeSelector = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeSelector
wantSS.Spec.Template.Spec.Affinity = proxyClassAllOpts.Spec.StatefulSet.Pod.Affinity
wantSS.Spec.Template.Spec.Tolerations = proxyClassAllOpts.Spec.StatefulSet.Pod.Tolerations
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.InitContainers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleInitContainer.SecurityContext
wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources
wantSS.Spec.Template.Spec.InitContainers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleInitContainer.Resources
wantSS.Spec.Template.Spec.InitContainers[0].Env = append(wantSS.Spec.Template.Spec.InitContainers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
gotSS := applyProxyClassToStatefulSet(proxyClassAllOpts, nonUserspaceProxySS.DeepCopy())
gotSS := applyProxyClassToStatefulSet(proxyClassAllOpts, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with all fields set to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
}
@ -156,7 +173,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, nonUserspaceProxySS.DeepCopy())
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
}
@ -172,10 +189,12 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.ImagePullSecrets = proxyClassAllOpts.Spec.StatefulSet.Pod.ImagePullSecrets
wantSS.Spec.Template.Spec.NodeName = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeName
wantSS.Spec.Template.Spec.NodeSelector = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeSelector
wantSS.Spec.Template.Spec.Affinity = proxyClassAllOpts.Spec.StatefulSet.Pod.Affinity
wantSS.Spec.Template.Spec.Tolerations = proxyClassAllOpts.Spec.StatefulSet.Pod.Tolerations
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources
gotSS = applyProxyClassToStatefulSet(proxyClassAllOpts, userspaceProxySS.DeepCopy())
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
gotSS = applyProxyClassToStatefulSet(proxyClassAllOpts, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
}
@ -187,10 +206,19 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, userspaceProxySS.DeepCopy())
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
}
// 5. Test that a ProxyClass with metrics enabled gets correctly applied to a StatefulSet.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
wantSS.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{{Name: "metrics", Protocol: "TCP", ContainerPort: 9001, HostPort: 9001}}
gotSS = applyProxyClassToStatefulSet(proxyClassMetrics, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with metrics enabled to a StatefulSet (-got +want):\n%s", diff)
}
}
func mergeMapKeys(a, b map[string]string) map[string]string {

@ -22,10 +22,16 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/net/dns/resolvconffile"
"tailscale.com/util/clientmetric"
"tailscale.com/util/set"
)
const (
resolvConfPath = "/etc/resolv.conf"
defaultClusterDomain = "cluster.local"
)
type ServiceReconciler struct {
client.Client
ssr *tailscaleSTSReconciler
@ -42,6 +48,8 @@ type ServiceReconciler struct {
managedEgressProxies set.Slice[types.UID]
recorder record.EventRecorder
tsNamespace string
}
var (
@ -82,7 +90,7 @@ func (a *ServiceReconciler) Reconcile(ctx context.Context, req reconcile.Request
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get svc: %w", err)
}
targetIP := a.tailnetTargetAnnotation(svc)
targetIP := tailnetTargetAnnotation(svc)
targetFQDN := svc.Annotations[AnnotationTailnetTargetFQDN]
if !svc.DeletionTimestamp.IsZero() || !a.shouldExpose(svc) && targetIP == "" && targetFQDN == "" {
logger.Debugf("service is being deleted or is (no longer) referring to Tailscale ingress/egress, ensuring any created resources are cleaned up")
@ -200,11 +208,15 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
}
a.mu.Lock()
if a.shouldExpose(svc) {
if a.shouldExposeClusterIP(svc) {
sts.ClusterTargetIP = svc.Spec.ClusterIP
a.managedIngressProxies.Add(svc.UID)
gaugeIngressProxies.Set(int64(a.managedIngressProxies.Len()))
} else if ip := a.tailnetTargetAnnotation(svc); ip != "" {
} else if a.shouldExposeDNSName(svc) {
sts.ClusterTargetDNSName = svc.Spec.ExternalName
a.managedIngressProxies.Add(svc.UID)
gaugeIngressProxies.Set(int64(a.managedIngressProxies.Len()))
} else if ip := tailnetTargetAnnotation(svc); ip != "" {
sts.TailnetTargetIP = ip
a.managedEgressProxies.Add(svc.UID)
gaugeEgressProxies.Set(int64(a.managedEgressProxies.Len()))
@ -225,10 +237,8 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
}
if sts.TailnetTargetIP != "" || sts.TailnetTargetFQDN != "" {
// TODO (irbekrm): cluster.local is the default DNS name, but
// can be changed by users. Make this configurable or figure out
// how to discover the DNS name from within operator
headlessSvcName := hsvc.Name + "." + hsvc.Namespace + ".svc.cluster.local"
clusterDomain := retrieveClusterDomain(a.tsNamespace, logger)
headlessSvcName := hsvc.Name + "." + hsvc.Namespace + ".svc." + clusterDomain
if svc.Spec.ExternalName != headlessSvcName || svc.Spec.Type != corev1.ServiceTypeExternalName {
svc.Spec.ExternalName = headlessSvcName
svc.Spec.Selector = nil
@ -240,7 +250,7 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
return nil
}
if !a.hasLoadBalancerClass(svc) {
if !isTailscaleLoadBalancerService(svc, a.isDefaultLoadBalancer) {
logger.Debugf("service is not a LoadBalancer, so not updating ingress")
return nil
}
@ -297,25 +307,30 @@ func validateService(svc *corev1.Service) []string {
}
func (a *ServiceReconciler) shouldExpose(svc *corev1.Service) bool {
// Headless services can't be exposed, since there is no ClusterIP to
// forward to.
return a.shouldExposeClusterIP(svc) || a.shouldExposeDNSName(svc)
}
func (a *ServiceReconciler) shouldExposeDNSName(svc *corev1.Service) bool {
return hasExposeAnnotation(svc) && svc.Spec.Type == corev1.ServiceTypeExternalName && svc.Spec.ExternalName != ""
}
func (a *ServiceReconciler) shouldExposeClusterIP(svc *corev1.Service) bool {
if svc.Spec.ClusterIP == "" || svc.Spec.ClusterIP == "None" {
return false
}
return a.hasLoadBalancerClass(svc) || a.hasExposeAnnotation(svc)
return isTailscaleLoadBalancerService(svc, a.isDefaultLoadBalancer) || hasExposeAnnotation(svc)
}
func (a *ServiceReconciler) hasLoadBalancerClass(svc *corev1.Service) bool {
func isTailscaleLoadBalancerService(svc *corev1.Service, isDefaultLoadBalancer bool) bool {
return svc != nil &&
svc.Spec.Type == corev1.ServiceTypeLoadBalancer &&
(svc.Spec.LoadBalancerClass != nil && *svc.Spec.LoadBalancerClass == "tailscale" ||
svc.Spec.LoadBalancerClass == nil && a.isDefaultLoadBalancer)
svc.Spec.LoadBalancerClass == nil && isDefaultLoadBalancer)
}
// hasExposeAnnotation reports whether Service has the tailscale.com/expose
// annotation set
func (a *ServiceReconciler) hasExposeAnnotation(svc *corev1.Service) bool {
func hasExposeAnnotation(svc *corev1.Service) bool {
return svc != nil && svc.Annotations[AnnotationExpose] == "true"
}
@ -323,7 +338,7 @@ func (a *ServiceReconciler) hasExposeAnnotation(svc *corev1.Service) bool {
// annotation or of the deprecated tailscale.com/ts-tailnet-target-ip
// annotation. If neither is set, it returns an empty string. If both are set,
// it returns the value of the new annotation.
func (a *ServiceReconciler) tailnetTargetAnnotation(svc *corev1.Service) string {
func tailnetTargetAnnotation(svc *corev1.Service) string {
if svc == nil {
return ""
}
@ -344,3 +359,51 @@ func proxyClassIsReady(ctx context.Context, name string, cl client.Client) (bool
}
return tsoperator.ProxyClassIsReady(proxyClass), nil
}
// retrieveClusterDomain determines the cluster domain (e.g. 'cluster.local')
// of the cluster in which this Pod is running by parsing the search domains
// in /etc/resolv.conf. If an error is encountered at any point during the
// process, it defaults the cluster domain to 'cluster.local'.
func retrieveClusterDomain(namespace string, logger *zap.SugaredLogger) string {
logger.Infof("attempting to retrieve cluster domain..")
conf, err := resolvconffile.ParseFile(resolvConfPath)
if err != nil {
// Vast majority of clusters use the cluster.local domain, so it
// is probably better to fall back to that than error out.
logger.Infof("[unexpected] error parsing /etc/resolv.conf to determine cluster domain, defaulting to 'cluster.local'.")
return defaultClusterDomain
}
return clusterDomainFromResolverConf(conf, namespace, logger)
}
// clusterDomainFromResolverConf attempts to retrieve the cluster domain from the provided resolver config.
// It expects the first three search domains in the resolver config to be
// ['<namespace>.svc.<cluster-domain>', 'svc.<cluster-domain>', '<cluster-domain>', ...].
// If the first three domains match the expected structure, it returns the third.
// If the domains don't match the expected structure or an error is encountered, it defaults to the 'cluster.local' domain.
func clusterDomainFromResolverConf(conf *resolvconffile.Config, namespace string, logger *zap.SugaredLogger) string {
if len(conf.SearchDomains) < 3 {
logger.Infof("[unexpected] resolver config contains only %d search domains, at least three expected.\nDefaulting cluster domain to 'cluster.local'.")
return defaultClusterDomain
}
first := conf.SearchDomains[0]
if !strings.HasPrefix(string(first), namespace+".svc") {
logger.Infof("[unexpected] first search domain in resolver config is %s; expected %s.\nDefaulting cluster domain to 'cluster.local'.", first, namespace+".svc.<cluster-domain>")
return defaultClusterDomain
}
second := conf.SearchDomains[1]
if !strings.HasPrefix(string(second), "svc") {
logger.Infof("[unexpected] second search domain in resolver config is %s; expected 'svc.<cluster-domain>'.\nDefaulting cluster domain to 'cluster.local'.", second)
return defaultClusterDomain
}
// Trim the trailing dot for backwards compatibility purposes as the
// cluster domain was previously hardcoded to 'cluster.local' without a
// trailing dot.
probablyClusterDomain := strings.TrimPrefix(second.WithoutTrailingDot(), "svc.")
third := conf.SearchDomains[2]
if !strings.EqualFold(third.WithoutTrailingDot(), probablyClusterDomain) {
logger.Infof("[unexpected] expected resolver config to contain serch domains <namespace>.svc.<cluster-domain>, svc.<cluster-domain>, <cluster-domain>; got %s %s %s\n. Defaulting cluster domain to 'cluster.local'.", first, second, third)
return defaultClusterDomain
}
logger.Infof("Cluster domain %q extracted from resolver config", probablyClusterDomain)
return probablyClusterDomain
}
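// Illustrative example (assumption, not part of the diff): for a Pod in the
// "tailscale" namespace of a cluster with the default domain, kubelet
// typically generates a resolv.conf containing
//	search tailscale.svc.cluster.local svc.cluster.local cluster.local
// for which clusterDomainFromResolverConf returns "cluster.local"; for
//	search tailscale.svc.my-domain.org svc.my-domain.org my-domain.org
// it returns "my-domain.org".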

@ -15,11 +15,13 @@ import (
"time"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"tailscale.com/client/tailscale"
@ -42,6 +44,7 @@ type configOpts struct {
tailnetTargetIP string
tailnetTargetFQDN string
clusterTargetIP string
clusterTargetDNS string
subnetRoutes string
isExitNode bool
confFileHash string
@ -52,6 +55,10 @@ type configOpts struct {
func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet {
t.Helper()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
tsContainer := corev1.Container{
Name: "tailscale",
Image: "tailscale/tailscale",
@ -60,6 +67,7 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "status.podIP"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "TS_KUBE_SECRET", Value: opts.secretName},
{Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH", Value: "/etc/tsconfig/tailscaled"},
{Name: "TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR", Value: "/etc/tsconfig"},
},
SecurityContext: &corev1.SecurityContext{
Capabilities: &corev1.Capabilities{
@ -82,12 +90,6 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: opts.secretName,
Items: []corev1.KeyToPath{
{
Key: "tailscaled",
Path: "tailscaled",
},
},
},
},
},
@ -97,7 +99,9 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
ReadOnly: true,
MountPath: "/etc/tsconfig",
}}
annots["tailscale.com/operator-last-set-config-file-hash"] = opts.confFileHash
if opts.confFileHash != "" {
annots["tailscale.com/operator-last-set-config-file-hash"] = opts.confFileHash
}
if opts.firewallMode != "" {
tsContainer.Env = append(tsContainer.Env, corev1.EnvVar{
Name: "TS_DEBUG_FIREWALL_MODE",
@ -123,15 +127,19 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
Value: opts.clusterTargetIP,
})
annots["tailscale.com/operator-last-set-cluster-ip"] = opts.clusterTargetIP
} else if opts.clusterTargetDNS != "" {
tsContainer.Env = append(tsContainer.Env, corev1.EnvVar{
Name: "TS_EXPERIMENTAL_DEST_DNS_NAME",
Value: opts.clusterTargetDNS,
})
annots["tailscale.com/operator-last-set-cluster-dns-name"] = opts.clusterTargetDNS
}
if opts.serveConfig != nil {
tsContainer.Env = append(tsContainer.Env, corev1.EnvVar{
Name: "TS_SERVE_CONFIG",
Value: "/etc/tailscaled/serve-config",
})
volumes = append(volumes, corev1.Volume{
Name: "serve-config", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: opts.secretName, Items: []corev1.KeyToPath{{Path: "serve-config", Key: "serve-config"}}}},
})
volumes = append(volumes, corev1.Volume{Name: "serve-config", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: opts.secretName, Items: []corev1.KeyToPath{{Key: "serve-config", Path: "serve-config"}}}}})
tsContainer.VolumeMounts = append(tsContainer.VolumeMounts, corev1.VolumeMount{Name: "serve-config", ReadOnly: true, MountPath: "/etc/tailscaled"})
}
ss := &appsv1.StatefulSet{
@ -174,8 +182,8 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
{
Name: "sysctler",
Image: "tailscale/tailscale",
Command: []string{"/bin/sh"},
Args: []string{"-c", "sysctl -w net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1"},
Command: []string{"/bin/sh", "-c"},
Args: []string{"sysctl -w net.ipv4.ip_forward=1 && if sysctl net.ipv6.conf.all.forwarding; then sysctl -w net.ipv6.conf.all.forwarding=1; fi"},
SecurityContext: &corev1.SecurityContext{
Privileged: ptr.To(true),
},
@ -195,20 +203,26 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
if err := cl.Get(context.Background(), types.NamespacedName{Name: opts.proxyClass}, proxyClass); err != nil {
t.Fatalf("error getting ProxyClass: %v", err)
}
return applyProxyClassToStatefulSet(proxyClass, ss)
return applyProxyClassToStatefulSet(proxyClass, ss, new(tailscaleSTSConfig), zl.Sugar())
}
return ss
}
func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet {
t.Helper()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
tsContainer := corev1.Container{
Name: "tailscale",
Image: "tailscale/tailscale",
Env: []corev1.EnvVar{
{Name: "TS_USERSPACE", Value: "true"},
{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "status.podIP"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "TS_KUBE_SECRET", Value: opts.secretName},
{Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH", Value: "/etc/tsconfig/tailscaled"},
{Name: "TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR", Value: "/etc/tsconfig"},
{Name: "TS_SERVE_CONFIG", Value: "/etc/tailscaled/serve-config"},
},
ImagePullPolicy: "Always",
@ -223,20 +237,12 @@ func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *apps
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: opts.secretName,
Items: []corev1.KeyToPath{
{
Key: "tailscaled",
Path: "tailscaled",
},
},
},
},
},
{Name: "serve-config",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{SecretName: opts.secretName,
Items: []corev1.KeyToPath{{Key: "serve-config", Path: "serve-config"}}}},
},
Secret: &corev1.SecretVolumeSource{SecretName: opts.secretName, Items: []corev1.KeyToPath{{Key: "serve-config", Path: "serve-config"}}}}},
}
ss := &appsv1.StatefulSet{
TypeMeta: metav1.TypeMeta{
@ -269,7 +275,6 @@ func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *apps
"tailscale.com/parent-resource-type": opts.parentType,
"app": "1234-UID",
},
Annotations: map[string]string{"tailscale.com/operator-last-set-config-file-hash": opts.confFileHash},
},
Spec: corev1.PodSpec{
ServiceAccountName: "proxies",
@ -280,6 +285,10 @@ func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *apps
},
},
}
ss.Spec.Template.Annotations = map[string]string{}
if opts.confFileHash != "" {
ss.Spec.Template.Annotations["tailscale.com/operator-last-set-config-file-hash"] = opts.confFileHash
}
// If opts.proxyClass is set, retrieve the ProxyClass and apply
// configuration from that to the StatefulSet.
if opts.proxyClass != "" {
@ -288,7 +297,7 @@ func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *apps
if err := cl.Get(context.Background(), types.NamespacedName{Name: opts.proxyClass}, proxyClass); err != nil {
t.Fatalf("error getting ProxyClass: %v", err)
}
return applyProxyClassToStatefulSet(proxyClass, ss)
return applyProxyClassToStatefulSet(proxyClass, ss, new(tailscaleSTSConfig), zl.Sugar())
}
return ss
}
@ -339,11 +348,12 @@ func expectedSecret(t *testing.T, opts configOpts) *corev1.Secret {
mak.Set(&s.StringData, "serve-config", string(serveConfigBs))
}
conf := &ipn.ConfigVAlpha{
Version: "alpha0",
AcceptDNS: "false",
Hostname: &opts.hostname,
Locked: "false",
AuthKey: ptr.To("secret-authkey"),
Version: "alpha0",
AcceptDNS: "false",
Hostname: &opts.hostname,
Locked: "false",
AuthKey: ptr.To("secret-authkey"),
AcceptRoutes: "false",
}
var routes []netip.Prefix
if opts.subnetRoutes != "" || opts.isExitNode {
@ -364,7 +374,17 @@ func expectedSecret(t *testing.T, opts configOpts) *corev1.Secret {
if err != nil {
t.Fatalf("error marshalling tailscaled config")
}
if opts.tailnetTargetFQDN != "" || opts.tailnetTargetIP != "" {
conf.NoStatefulFiltering = "true"
} else {
conf.NoStatefulFiltering = "false"
}
bn, err := json.Marshal(conf)
if err != nil {
t.Fatalf("error marshalling tailscaled config")
}
mak.Set(&s.StringData, "tailscaled", string(b))
mak.Set(&s.StringData, "cap-95.hujson", string(bn))
labels := map[string]string{
"tailscale.com/managed": "true",
"tailscale.com/parent-resource": "test",
@ -433,7 +453,13 @@ func mustUpdateStatus[T any, O ptrObject[T]](t *testing.T, client client.Client,
}
}
func expectEqual[T any, O ptrObject[T]](t *testing.T, client client.Client, want O) {
// expectEqual accepts a Kubernetes object and a Kubernetes client. It tests
// whether an object with equivalent contents can be retrieved by the passed
// client. If you do NOT want some object fields to be tested for equality,
// ensure that they are not present in the passed object and use the modifier
// func to remove them from the cluster object. If no such modifications are
// needed, you can pass nil in place of the modifier function.
func expectEqual[T any, O ptrObject[T]](t *testing.T, client client.Client, want O, modifier func(O)) {
t.Helper()
got := O(new(T))
if err := client.Get(context.Background(), types.NamespacedName{
@ -447,6 +473,9 @@ func expectEqual[T any, O ptrObject[T]](t *testing.T, client client.Client, want
// so just remove it from both got and want.
got.SetResourceVersion("")
want.SetResourceVersion("")
if modifier != nil {
modifier(got)
}
if diff := cmp.Diff(got, want); diff != "" {
t.Fatalf("unexpected object (-got +want):\n%s", diff)
}
@ -500,6 +529,34 @@ func expectRequeue(t *testing.T, sr reconcile.Reconciler, ns, name string) {
}
}
// expectEvents accepts a test recorder and a list of events and tests that the
// expected events are sent down the recorder's channel. It waits up to 5s for each event.
func expectEvents(t *testing.T, rec *record.FakeRecorder, wantsEvents []string) {
t.Helper()
// Events are not expected to arrive in order.
seenEvents := make([]string, 0)
for range len(wantsEvents) {
timer := time.NewTimer(time.Second * 5)
defer timer.Stop()
select {
case gotEvent := <-rec.Events:
found := false
for _, wantEvent := range wantsEvents {
if wantEvent == gotEvent {
found = true
seenEvents = append(seenEvents, gotEvent)
break
}
}
if !found {
t.Errorf("got unexpected event %q, expected events: %+#v", gotEvent, wantsEvents)
}
case <-timer.C:
t.Errorf("timeout waiting for an event, wants events %#+v, got events %+#v", wantsEvents, seenEvents)
}
}
}
type fakeTSClient struct {
sync.Mutex
keyRequests []tailscale.KeyCapabilities
@ -543,3 +600,11 @@ func (c *fakeTSClient) Deleted() []string {
defer c.Unlock()
return c.deleted
}
// removeHashAnnotation can be used to remove the declarative tailscaled config hash
// annotation from proxy StatefulSets to make the tests more maintainable (so
// that we don't have to change the annotation in each test case after any
// change to the config file contents).
func removeHashAnnotation(sts *appsv1.StatefulSet) {
delete(sts.Spec.Template.Annotations, podAnnotationLastSetConfigFileHash)
}
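// Illustrative usage sketch (not part of the diff; fc and o stand in for a
// test's fake client and configOpts): paired with the modifier parameter of
// expectEqual, this lets a test ignore the hash annotation while comparing
// everything else:
//	expectEqual(t, fc, expectedSTS(t, fc, o), removeHashAnnotation)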

@ -314,7 +314,7 @@ func mustMakeNamesByAddr() map[netip.Addr]string {
seen := make(map[string]bool)
namesByAddr := make(map[netip.Addr]string)
retry:
for i := 0; i < 10; i++ {
for i := range 10 {
clear(seen)
clear(namesByAddr)
for _, d := range m.Devices {
@ -354,7 +354,7 @@ func fieldPrefix(s string, n int) string {
}
func appendRepeatByte(b []byte, c byte, n int) []byte {
for i := 0; i < n; i++ {
for range n {
b = append(b, c)
}
return b

@ -28,7 +28,6 @@ import (
"tailscale.com/metrics"
"tailscale.com/tsnet"
"tailscale.com/tsweb"
"tailscale.com/types/logger"
)
var (
@ -58,8 +57,6 @@ func main() {
ts := &tsnet.Server{
Dir: *tailscaleDir,
Hostname: *hostname,
// Make the stdout logs a clean audit log of connections.
Logf: logger.Discard,
}
if os.Getenv("TS_AUTHKEY") == "" {

@ -88,7 +88,7 @@ func main() {
go func() {
// wait for tailscale to start before trying to fetch cert names
for i := 0; i < 60; i++ {
for range 60 {
st, err := localClient.Status(context.Background())
if err != nil {
log.Printf("error retrieving tailscale status; retrying: %v", err)

@ -8,6 +8,7 @@ import (
"encoding/json"
"flag"
"fmt"
"log"
"net"
"net/http/httptest"
"net/netip"
@ -24,6 +25,7 @@ import (
"tailscale.com/tsnet"
"tailscale.com/tstest/integration"
"tailscale.com/tstest/integration/testcontrol"
"tailscale.com/tstest/nettest"
"tailscale.com/types/appctype"
"tailscale.com/types/ipproto"
"tailscale.com/types/key"
@ -98,8 +100,8 @@ func startNode(t *testing.T, ctx context.Context, controlURL, hostname string) (
Store: new(mem.Store),
Ephemeral: true,
}
if !*verboseNodes {
s.Logf = logger.Discard
if *verboseNodes {
s.Logf = log.Printf
}
t.Cleanup(func() { s.Close() })
@ -111,6 +113,7 @@ func startNode(t *testing.T, ctx context.Context, controlURL, hostname string) (
}
func TestSNIProxyWithNetmapConfig(t *testing.T) {
nettest.SkipIfNoNetwork(t)
c, controlURL := startControl(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
@ -158,7 +161,7 @@ func TestSNIProxyWithNetmapConfig(t *testing.T) {
t.Fatal(err)
}
gotConfigured := false
for i := 0; i < 100; i++ {
for range 100 {
s, err := l.StatusWithoutPeers(ctx)
if err != nil {
t.Fatal(err)
@ -189,6 +192,7 @@ func TestSNIProxyWithNetmapConfig(t *testing.T) {
}
func TestSNIProxyWithFlagConfig(t *testing.T) {
nettest.SkipIfNoNetwork(t)
_, controlURL := startControl(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

@ -2,7 +2,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
github.com/beorn7/perks/quantile from github.com/prometheus/client_golang/prometheus
💣 github.com/cespare/xxhash/v2 from github.com/prometheus/client_golang/prometheus
github.com/google/uuid from tailscale.com/tsweb
github.com/google/uuid from tailscale.com/util/fastuuid
💣 github.com/prometheus/client_golang/prometheus from tailscale.com/tsweb/promvarz
github.com/prometheus/client_golang/prometheus/internal from github.com/prometheus/client_golang/prometheus
github.com/prometheus/client_model/go from github.com/prometheus/client_golang/prometheus+
@ -20,6 +20,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
google.golang.org/protobuf/internal/descfmt from google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/descopts from google.golang.org/protobuf/internal/filedesc+
google.golang.org/protobuf/internal/detrand from google.golang.org/protobuf/internal/descfmt+
google.golang.org/protobuf/internal/editiondefaults from google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/encoding/defval from google.golang.org/protobuf/internal/encoding/tag+
google.golang.org/protobuf/internal/encoding/messageset from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/internal/encoding/tag from google.golang.org/protobuf/internal/impl
@ -65,6 +66,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
tailscale.com/util/ctxkey from tailscale.com/tsweb+
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics
tailscale.com/util/dnsname from tailscale.com/tailcfg
tailscale.com/util/fastuuid from tailscale.com/tsweb
tailscale.com/util/lineread from tailscale.com/version/distro
tailscale.com/util/nocasemaps from tailscale.com/types/ipproto
tailscale.com/util/slicesx from tailscale.com/tailcfg
@ -151,6 +153,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
math/big from crypto/dsa+
math/bits from compress/flate+
math/rand from math/big+
math/rand/v2 from tailscale.com/util/fastuuid
mime from github.com/prometheus/common/expfmt+
mime/multipart from net/http
mime/quotedprintable from mime/multipart

@ -17,7 +17,7 @@ var bugReportCmd = &ffcli.Command{
Name: "bugreport",
Exec: runBugReport,
ShortHelp: "Print a shareable identifier to help diagnose issues",
ShortUsage: "bugreport [note]",
ShortUsage: "tailscale bugreport [note]",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("bugreport")
fs.BoolVar(&bugReportArgs.diagnose, "diagnose", false, "run additional in-depth checks")

@ -28,7 +28,7 @@ var certCmd = &ffcli.Command{
Name: "cert",
Exec: runCert,
ShortHelp: "Get TLS certs",
ShortUsage: "cert [flags] <domain>",
ShortUsage: "tailscale cert [flags] <domain>",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("cert")
fs.StringVar(&certArgs.certFile, "cert-file", "", "output cert file or \"-\" for stdout; defaults to DOMAIN.crt if --cert-file and --key-file are both unset")

@ -14,13 +14,15 @@ import (
"log"
"os"
"runtime"
"slices"
"strings"
"sync"
"text/tabwriter"
"github.com/mattn/go-colorable"
"github.com/mattn/go-isatty"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/client/tailscale"
"tailscale.com/cmd/tailscale/cli/ffcomplete"
"tailscale.com/envknob"
"tailscale.com/paths"
"tailscale.com/version/distro"
@ -76,7 +78,9 @@ func CleanUpArgs(args []string) []string {
return out
}
var localClient tailscale.LocalClient
var localClient = tailscale.LocalClient{
Socket: paths.DefaultTailscaledSocket(),
}
// Run runs the CLI. The args do not include the binary name.
func Run(args []string) (err error) {
@ -93,8 +97,68 @@ func Run(args []string) (err error) {
})
})
rootCmd := newRootCmd()
if err := rootCmd.Parse(args); err != nil {
if errors.Is(err, flag.ErrHelp) {
return nil
}
if noexec := (ffcli.NoExecError{}); errors.As(err, &noexec) {
// When the user enters an unknown subcommand, ffcli tries to run
// the closest valid parent subcommand with everything else as args,
// returning NoExecError if it doesn't have an Exec function.
cmd := noexec.Command
args := cmd.FlagSet.Args()
if len(cmd.Subcommands) > 0 {
if len(args) > 0 {
return fmt.Errorf("%s: unknown subcommand: %s", fullCmd(rootCmd, cmd), args[0])
}
subs := make([]string, 0, len(cmd.Subcommands))
for _, sub := range cmd.Subcommands {
subs = append(subs, sub.Name)
}
return fmt.Errorf("%s: missing subcommand: %s", fullCmd(rootCmd, cmd), strings.Join(subs, ", "))
}
}
return err
}
if envknob.Bool("TS_DUMP_HELP") {
walkCommands(rootCmd, func(w cmdWalk) bool {
fmt.Println("===")
// UsageFuncs are typically called during Command.Run which ensures
// FlagSet is not nil.
c := w.Command
if c.FlagSet == nil {
c.FlagSet = flag.NewFlagSet(c.Name, flag.ContinueOnError)
}
if c.UsageFunc != nil {
fmt.Println(c.UsageFunc(c))
} else {
fmt.Println(ffcli.DefaultUsageFunc(c))
}
return true
})
return
}
err = rootCmd.Run(context.Background())
if tailscale.IsAccessDeniedError(err) && os.Getuid() != 0 && runtime.GOOS != "windows" {
return fmt.Errorf("%v\n\nUse 'sudo tailscale %s' or 'tailscale up --operator=$USER' to not require root.", err, strings.Join(args, " "))
}
if errors.Is(err, flag.ErrHelp) {
return nil
}
return err
}
func newRootCmd() *ffcli.Command {
rootfs := newFlagSet("tailscale")
rootfs.StringVar(&rootArgs.socket, "socket", paths.DefaultTailscaledSocket(), "path to tailscaled socket")
rootfs.Func("socket", "path to tailscaled socket", func(s string) error {
localClient.Socket = s
localClient.UseSocketOnly = true
return nil
})
rootfs.Lookup("socket").DefValue = localClient.Socket
rootCmd := &ffcli.Command{
Name: "tailscale",
@ -129,59 +193,35 @@ change in the future.
certCmd,
netlockCmd,
licensesCmd,
exitNodeCmd,
exitNodeCmd(),
updateCmd,
whoisCmd,
},
FlagSet: rootfs,
Exec: func(context.Context, []string) error { return flag.ErrHelp },
UsageFunc: usageFunc,
}
if envknob.UseWIPCode() {
rootCmd.Subcommands = append(rootCmd.Subcommands,
debugCmd,
driveCmd,
idTokenCmd,
)
},
FlagSet: rootfs,
Exec: func(ctx context.Context, args []string) error {
if len(args) > 0 {
return fmt.Errorf("tailscale: unknown subcommand: %s", args[0])
}
return flag.ErrHelp
},
}
// Don't advertise these commands, but they're still explicitly available.
switch {
case slices.Contains(args, "debug"):
rootCmd.Subcommands = append(rootCmd.Subcommands, debugCmd)
case slices.Contains(args, "share"):
rootCmd.Subcommands = append(rootCmd.Subcommands, shareCmd)
}
if runtime.GOOS == "linux" && distro.Get() == distro.Synology {
rootCmd.Subcommands = append(rootCmd.Subcommands, configureHostCmd)
}
for _, c := range rootCmd.Subcommands {
if c.UsageFunc == nil {
c.UsageFunc = usageFunc
}
}
if err := rootCmd.Parse(args); err != nil {
if errors.Is(err, flag.ErrHelp) {
return nil
}
return err
}
localClient.Socket = rootArgs.socket
rootfs.Visit(func(f *flag.Flag) {
if f.Name == "socket" {
localClient.UseSocketOnly = true
walkCommands(rootCmd, func(w cmdWalk) bool {
if w.UsageFunc == nil {
w.UsageFunc = usageFunc
}
return true
})
err = rootCmd.Run(context.Background())
if tailscale.IsAccessDeniedError(err) && os.Getuid() != 0 && runtime.GOOS != "windows" {
return fmt.Errorf("%v\n\nUse 'sudo tailscale %s' or 'tailscale up --operator=$USER' to not require root.", err, strings.Join(args, " "))
}
if errors.Is(err, flag.ErrHelp) {
return nil
}
return err
ffcomplete.Inject(rootCmd, func(c *ffcli.Command) { c.LongHelp = hidden + c.LongHelp }, usageFunc)
return rootCmd
}
func fatalf(format string, a ...any) {
@ -196,8 +236,57 @@ func fatalf(format string, a ...any) {
// Fatalf, if non-nil, is used instead of log.Fatalf.
var Fatalf func(format string, a ...any)
var rootArgs struct {
socket string
type cmdWalk struct {
*ffcli.Command
parents []*ffcli.Command
}
func (w cmdWalk) Path() string {
if len(w.parents) == 0 {
return w.Name
}
var sb strings.Builder
for _, p := range w.parents {
sb.WriteString(p.Name)
sb.WriteString(" ")
}
sb.WriteString(w.Name)
return sb.String()
}
// walkCommands calls f for root and all of its nested subcommands until f
// returns false or all have been visited.
func walkCommands(root *ffcli.Command, f func(w cmdWalk) (more bool)) {
var walk func(cmd *ffcli.Command, parents []*ffcli.Command, f func(cmdWalk) bool) bool
walk = func(cmd *ffcli.Command, parents []*ffcli.Command, f func(cmdWalk) bool) bool {
if !f(cmdWalk{cmd, parents}) {
return false
}
parents = append(parents, cmd)
for _, sub := range cmd.Subcommands {
if !walk(sub, parents, f) {
return false
}
}
return true
}
walk(root, nil, f)
}
// fullCmd returns the full "tailscale ... cmd" invocation for a subcommand.
func fullCmd(root, cmd *ffcli.Command) (full string) {
walkCommands(root, func(w cmdWalk) bool {
if w.Command == cmd {
full = w.Path()
return false
}
return true
})
if full == "" {
return cmd.Name
}
return full
}
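// Illustrative usage sketch (not part of the diff): walkCommands can also be
// used to enumerate every registered subcommand path, e.g. to print them all:
//	walkCommands(newRootCmd(), func(w cmdWalk) bool {
//		fmt.Println(w.Path())
//		return true
//	})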
// usageFuncNoDefaultValues is like usageFunc but doesn't print default values.
@ -209,25 +298,36 @@ func usageFunc(c *ffcli.Command) string {
return usageFuncOpt(c, true)
}
// hidden is the prefix that hides subcommands and flags from --help output when
// found at the start of the subcommand's LongHelp or flag's Usage.
const hidden = "HIDDEN: "
func usageFuncOpt(c *ffcli.Command, withDefaults bool) string {
var b strings.Builder
if c.ShortHelp != "" {
fmt.Fprintf(&b, "%s\n\n", c.ShortHelp)
}
fmt.Fprintf(&b, "USAGE\n")
if c.ShortUsage != "" {
fmt.Fprintf(&b, " %s\n", c.ShortUsage)
fmt.Fprintf(&b, " %s\n", strings.ReplaceAll(c.ShortUsage, "\n", "\n "))
} else {
fmt.Fprintf(&b, " %s\n", c.Name)
}
fmt.Fprintf(&b, "\n")
if c.LongHelp != "" {
fmt.Fprintf(&b, "%s\n\n", c.LongHelp)
if help := strings.TrimPrefix(c.LongHelp, hidden); help != "" {
fmt.Fprintf(&b, "%s\n\n", help)
}
if len(c.Subcommands) > 0 {
fmt.Fprintf(&b, "SUBCOMMANDS\n")
tw := tabwriter.NewWriter(&b, 0, 2, 2, ' ', 0)
for _, subcommand := range c.Subcommands {
if strings.HasPrefix(subcommand.LongHelp, hidden) {
continue
}
fmt.Fprintf(tw, " %s\t%s\n", subcommand.Name, subcommand.ShortHelp)
}
tw.Flush()
@ -240,7 +340,7 @@ func usageFuncOpt(c *ffcli.Command, withDefaults bool) string {
c.FlagSet.VisitAll(func(f *flag.Flag) {
var s string
name, usage := flag.UnquoteUsage(f)
if strings.HasPrefix(usage, "HIDDEN: ") {
if strings.HasPrefix(usage, hidden) {
return
}
if isBoolFlag(f) {
@ -287,3 +387,17 @@ func countFlags(fs *flag.FlagSet) (n int) {
fs.VisitAll(func(*flag.Flag) { n++ })
return n
}
// colorableOutput returns a colorable writer if stdout is a terminal (not, say,
// redirected to a file or pipe), the Stdout writer is os.Stdout (we're not
// embedding the CLI in wasm or a mobile app), and NO_COLOR is not set (see
// https://no-color.org/). If any of those is not the case, ok is false
// and w is Stdout.
func colorableOutput() (w io.Writer, ok bool) {
if Stdout != os.Stdout ||
os.Getenv("NO_COLOR") != "" ||
!isatty.IsTerminal(os.Stdout.Fd()) {
return Stdout, false
}
return colorable.NewColorableStdout(), true
}
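// Illustrative usage sketch (not part of the diff): callers that want ANSI
// color only on a real terminal can branch on ok:
//	out, ok := colorableOutput()
//	if ok {
//		fmt.Fprintln(out, "\x1b[32mconnected\x1b[0m")
//	} else {
//		fmt.Fprintln(out, "connected")
//	}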

@ -16,6 +16,7 @@ import (
qt "github.com/frankban/quicktest"
"github.com/google/go-cmp/cmp"
"tailscale.com/envknob"
"tailscale.com/health/healthmsg"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
@ -28,6 +29,110 @@ import (
"tailscale.com/version/distro"
)
func TestPanicIfAnyEnvCheckedInInit(t *testing.T) {
envknob.PanicIfAnyEnvCheckedInInit()
}
func TestShortUsage(t *testing.T) {
t.Setenv("TAILSCALE_USE_WIP_CODE", "1")
if !envknob.UseWIPCode() {
t.Fatal("expected envknob.UseWIPCode() to be true")
}
walkCommands(newRootCmd(), func(w cmdWalk) bool {
c, parents := w.Command, w.parents
// Words that we expect to be in the usage.
words := make([]string, len(parents)+1)
for i, parent := range parents {
words[i] = parent.Name
}
words[len(parents)] = c.Name
// Check the ShortHelp starts with a capital letter.
if prefix, help := trimPrefixes(c.ShortHelp, "HIDDEN: ", "[ALPHA] ", "[BETA] "); help != "" {
if 'a' <= help[0] && help[0] <= 'z' {
if len(help) > 20 {
help = help[:20] + "…"
}
caphelp := string(help[0]-'a'+'A') + help[1:]
t.Errorf("command: %s: ShortHelp %q should start with a capital letter %q", strings.Join(words, " "), prefix+help, prefix+caphelp)
}
}
// Check all words appear in the usage.
usage := c.ShortUsage
for _, word := range words {
var ok bool
usage, ok = cutWord(usage, word)
if !ok {
full := strings.Join(words, " ")
t.Errorf("command: %s: usage %q should contain the full path %q", full, c.ShortUsage, full)
return true
}
}
return true
})
}
func trimPrefixes(full string, prefixes ...string) (trimmed, remaining string) {
s := full
start:
for _, p := range prefixes {
var ok bool
s, ok = strings.CutPrefix(s, p)
if ok {
goto start
}
}
return full[:len(full)-len(s)], s
}
// cutWord("tailscale debug scale 123", "scale") returns (" 123", true).
func cutWord(s, w string) (after string, ok bool) {
var p string
for {
p, s, ok = strings.Cut(s, w)
if !ok {
return "", false
}
if p != "" && isWordChar(p[len(p)-1]) {
continue
}
if s != "" && isWordChar(s[0]) {
continue
}
return s, true
}
}
func isWordChar(r byte) bool {
return r == '_' ||
('0' <= r && r <= '9') ||
('A' <= r && r <= 'Z') ||
('a' <= r && r <= 'z')
}
func TestCutWord(t *testing.T) {
tests := []struct {
in string
word string
out string
ok bool
}{
{"tailscale debug", "debug", "", true},
{"tailscale debug", "bug", "", false},
{"tailscale debug", "tail", "", false},
{"tailscale debug scaley scale 123", "scale", " 123", true},
}
for _, test := range tests {
out, ok := cutWord(test.in, test.word)
if out != test.out || ok != test.ok {
t.Errorf("cutWord(%q, %q) = (%q, %t), wanted (%q, %t)", test.in, test.word, out, ok, test.out, test.ok)
}
}
}
// geese is a collection of gooses. It need not be complete.
// But it should include anything handled specially (e.g. linux, windows)
// and at least one thing that's not (darwin, freebsd).
@ -550,12 +655,13 @@ func TestPrefsFromUpArgs(t *testing.T) {
goos: "linux",
args: upArgsFromOSArgs("linux"),
want: &ipn.Prefs{
ControlURL: ipn.DefaultControlURL,
WantRunning: true,
NoSNAT: false,
NetfilterMode: preftype.NetfilterOn,
CorpDNS: true,
AllowSingleHosts: true,
ControlURL: ipn.DefaultControlURL,
WantRunning: true,
NoSNAT: false,
NoStatefulFiltering: "false",
NetfilterMode: preftype.NetfilterOn,
CorpDNS: true,
AllowSingleHosts: true,
AutoUpdate: ipn.AutoUpdatePrefs{
Check: true,
},
@ -566,12 +672,14 @@ func TestPrefsFromUpArgs(t *testing.T) {
goos: "windows",
args: upArgsFromOSArgs("windows"),
want: &ipn.Prefs{
ControlURL: ipn.DefaultControlURL,
WantRunning: true,
CorpDNS: true,
AllowSingleHosts: true,
RouteAll: true,
NetfilterMode: preftype.NetfilterOn,
ControlURL: ipn.DefaultControlURL,
WantRunning: true,
CorpDNS: true,
AllowSingleHosts: true,
RouteAll: true,
NoSNAT: false,
NoStatefulFiltering: "false",
NetfilterMode: preftype.NetfilterOn,
AutoUpdate: ipn.AutoUpdatePrefs{
Check: true,
},
@ -589,7 +697,8 @@ func TestPrefsFromUpArgs(t *testing.T) {
netip.MustParsePrefix("0.0.0.0/0"),
netip.MustParsePrefix("::/0"),
},
NetfilterMode: preftype.NetfilterOn,
NoStatefulFiltering: "false",
NetfilterMode: preftype.NetfilterOn,
AutoUpdate: ipn.AutoUpdatePrefs{
Check: true,
},
@ -676,9 +785,10 @@ func TestPrefsFromUpArgs(t *testing.T) {
},
wantWarn: "netfilter=nodivert; add iptables calls to ts-* chains manually.",
want: &ipn.Prefs{
WantRunning: true,
NetfilterMode: preftype.NetfilterNoDivert,
NoSNAT: true,
WantRunning: true,
NetfilterMode: preftype.NetfilterNoDivert,
NoSNAT: true,
NoStatefulFiltering: "true",
AutoUpdate: ipn.AutoUpdatePrefs{
Check: true,
},
@ -692,9 +802,10 @@ func TestPrefsFromUpArgs(t *testing.T) {
},
wantWarn: "netfilter=off; configure iptables yourself.",
want: &ipn.Prefs{
WantRunning: true,
NetfilterMode: preftype.NetfilterOff,
NoSNAT: true,
WantRunning: true,
NetfilterMode: preftype.NetfilterOff,
NoSNAT: true,
NoStatefulFiltering: "true",
AutoUpdate: ipn.AutoUpdatePrefs{
Check: true,
},
@ -708,8 +819,9 @@ func TestPrefsFromUpArgs(t *testing.T) {
netfilterMode: "off",
},
want: &ipn.Prefs{
WantRunning: true,
NoSNAT: true,
WantRunning: true,
NoSNAT: true,
NoStatefulFiltering: "true",
AdvertiseRoutes: []netip.Prefix{
netip.MustParsePrefix("fd7a:115c:a1e0:b1a::bb:10.0.0.0/112"),
},
@ -726,8 +838,9 @@ func TestPrefsFromUpArgs(t *testing.T) {
netfilterMode: "off",
},
want: &ipn.Prefs{
WantRunning: true,
NoSNAT: true,
WantRunning: true,
NoSNAT: true,
NoStatefulFiltering: "true",
AdvertiseRoutes: []netip.Prefix{
netip.MustParsePrefix("fd7a:115c:a1e0:b1a::aabb:10.0.0.0/112"),
},
@ -803,7 +916,7 @@ func TestPrefFlagMapping(t *testing.T) {
}
prefType := reflect.TypeFor[ipn.Prefs]()
for i := 0; i < prefType.NumField(); i++ {
for i := range prefType.NumField() {
prefName := prefType.Field(i).Name
if prefHasFlag[prefName] {
continue
@ -829,10 +942,14 @@ func TestPrefFlagMapping(t *testing.T) {
// Handled by TS_DEBUG_FIREWALL_MODE env var, we don't want to have
// a CLI flag for this. The Pref is used by c2n.
continue
case "TailFSShares":
case "DriveShares":
// Handled by the tailscale share subcommand, we don't want a CLI
// flag for this.
continue
case "InternalExitNodePrior":
// Used internally by LocalBackend as part of exit node usage toggling.
// No CLI flag for this.
continue
}
t.Errorf("unexpected new ipn.Pref field %q is not handled by up.go (see addPrefFlagMapping and checkForAccidentalSettingReverts)", prefName)
}
@ -922,6 +1039,7 @@ func TestUpdatePrefs(t *testing.T) {
HostnameSet: true,
NetfilterModeSet: true,
NoSNATSet: true,
NoStatefulFilteringSet: true,
OperatorUserSet: true,
RouteAllSet: true,
RunSSHSet: true,

@ -27,7 +27,7 @@ func init() {
var configureKubeconfigCmd = &ffcli.Command{
Name: "kubeconfig",
ShortHelp: "[ALPHA] Connect to a Kubernetes cluster using a Tailscale Auth Proxy",
ShortUsage: "kubeconfig <hostname-or-fqdn>",
ShortUsage: "tailscale configure kubeconfig <hostname-or-fqdn>",
LongHelp: strings.TrimSpace(`
Run this command to configure kubectl to connect to a Kubernetes cluster over Tailscale.
@ -43,7 +43,20 @@ See: https://tailscale.com/s/k8s-auth-proxy
}
// kubeconfigPath returns the path to the kubeconfig file for the current user.
func kubeconfigPath() string {
func kubeconfigPath() (string, error) {
if kubeconfig := os.Getenv("KUBECONFIG"); kubeconfig != "" {
if version.IsSandboxedMacOS() {
return "", errors.New("$KUBECONFIG is incompatible with the App Store version")
}
var out string
for _, out = range filepath.SplitList(kubeconfig) {
if info, err := os.Stat(out); err == nil && !info.IsDir() {
break
}
}
return out, nil
}
var dir string
if version.IsSandboxedMacOS() {
// The HOME environment variable in macOS sandboxed apps is set to
@ -55,7 +68,7 @@ func kubeconfigPath() string {
} else {
dir = homedir.HomeDir()
}
return filepath.Join(dir, ".kube", "config")
return filepath.Join(dir, ".kube", "config"), nil
}
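// Illustrative note (assumption, not part of the diff): with
// KUBECONFIG=/home/me/.kube/staging:/home/me/.kube/config and only the second
// entry present as a regular file, kubeconfigPath returns
// "/home/me/.kube/config"; with KUBECONFIG unset it falls back to
// <home>/.kube/config.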
func runConfigureKubeconfig(ctx context.Context, args []string) error {
@ -76,7 +89,11 @@ func runConfigureKubeconfig(ctx context.Context, args []string) error {
return fmt.Errorf("no peer found with hostname %q", hostOrFQDN)
}
targetFQDN = strings.TrimSuffix(targetFQDN, ".")
if err := setKubeconfigForPeer(targetFQDN, kubeconfigPath()); err != nil {
var kubeconfig string
if kubeconfig, err = kubeconfigPath(); err != nil {
return err
}
if err = setKubeconfigForPeer(targetFQDN, kubeconfig); err != nil {
return err
}
printf("kubeconfig configured for %q\n", hostOrFQDN)
@ -119,7 +136,7 @@ func updateKubeconfig(cfgYaml []byte, fqdn string) ([]byte, error) {
var clusters []any
if cm, ok := cfg["clusters"]; ok {
clusters = cm.([]any)
clusters, _ = cm.([]any)
}
cfg["clusters"] = appendOrSetNamed(clusters, fqdn, map[string]any{
"name": fqdn,
@ -130,7 +147,7 @@ func updateKubeconfig(cfgYaml []byte, fqdn string) ([]byte, error) {
var users []any
if um, ok := cfg["users"]; ok {
users = um.([]any)
users, _ = um.([]any)
}
cfg["users"] = appendOrSetNamed(users, "tailscale-auth", map[string]any{
// We just need one of these, and can reuse it for all clusters.
@ -144,7 +161,7 @@ func updateKubeconfig(cfgYaml []byte, fqdn string) ([]byte, error) {
var contexts []any
if cm, ok := cfg["contexts"]; ok {
contexts = cm.([]any)
contexts, _ = cm.([]any)
}
cfg["contexts"] = appendOrSetNamed(contexts, fqdn, map[string]any{
"name": fqdn,

@ -48,6 +48,31 @@ contexts:
current-context: foo.tail-scale.ts.net
kind: Config
users:
- name: tailscale-auth
user:
token: unused`,
},
{
name: "all configs, clusters, users have been deleted",
in: `apiVersion: v1
clusters: null
contexts: null
kind: Config
current-context: some-non-existent-cluster
users: null`,
want: `apiVersion: v1
clusters:
- cluster:
server: https://foo.tail-scale.ts.net
name: foo.tail-scale.ts.net
contexts:
- context:
cluster: foo.tail-scale.ts.net
user: tailscale-auth
name: foo.tail-scale.ts.net
current-context: foo.tail-scale.ts.net
kind: Config
users:
- name: tailscale-auth
user:
token: unused`,

@ -0,0 +1,220 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package cli
import (
"context"
"encoding/json"
"errors"
"flag"
"fmt"
"log"
"os"
"os/exec"
"path"
"runtime"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/hostinfo"
"tailscale.com/ipn"
"tailscale.com/version/distro"
)
var synologyConfigureCertCmd = &ffcli.Command{
Name: "synology-cert",
Exec: runConfigureSynologyCert,
ShortHelp: "Configure Synology with a TLS certificate for your tailnet",
ShortUsage: "synology-cert [--domain <domain>]",
LongHelp: strings.TrimSpace(`
This command is intended to run periodically as root on a Synology device to
create or refresh the TLS certificate for the tailnet domain.
See: https://tailscale.com/kb/1153/enabling-https
`),
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("synology-cert")
fs.StringVar(&synologyConfigureCertArgs.domain, "domain", "", "Tailnet domain to create or refresh certificates for. Ignored if only one domain exists.")
return fs
})(),
}
var synologyConfigureCertArgs struct {
domain string
}
func runConfigureSynologyCert(ctx context.Context, args []string) error {
if len(args) > 0 {
return errors.New("unknown arguments")
}
if runtime.GOOS != "linux" || distro.Get() != distro.Synology {
return errors.New("only implemented on Synology")
}
if uid := os.Getuid(); uid != 0 {
return fmt.Errorf("must be run as root, not %q (%v)", os.Getenv("USER"), uid)
}
hi := hostinfo.New()
isDSM6 := strings.HasPrefix(hi.DistroVersion, "6.")
isDSM7 := strings.HasPrefix(hi.DistroVersion, "7.")
if !isDSM6 && !isDSM7 {
return fmt.Errorf("unsupported DSM version %q", hi.DistroVersion)
}
domain := synologyConfigureCertArgs.domain
if st, err := localClient.Status(ctx); err == nil {
if st.BackendState != ipn.Running.String() {
return fmt.Errorf("Tailscale is not running.")
} else if len(st.CertDomains) == 0 {
return fmt.Errorf("TLS certificate support is not enabled/configured for your tailnet.")
} else if len(st.CertDomains) == 1 {
if domain != "" && domain != st.CertDomains[0] {
log.Printf("Ignoring supplied domain %q, TLS certificate will be created for %q.\n", domain, st.CertDomains[0])
}
domain = st.CertDomains[0]
} else {
var found bool
for _, d := range st.CertDomains {
if d == domain {
found = true
break
}
}
if !found {
return fmt.Errorf("Domain %q was not one of the valid domain options: %q.", domain, st.CertDomains)
}
}
}
// Check for an existing certificate, and replace it if it already exists
var id string
certs, err := listCerts(ctx, synowebapiCommand{})
if err != nil {
return err
}
for _, c := range certs {
if c.Subject.CommonName == domain {
id = c.ID
break
}
}
certPEM, keyPEM, err := localClient.CertPair(ctx, domain)
if err != nil {
return err
}
// Certs have to be written to file for the upload command to work.
tmpDir, err := os.MkdirTemp("", "")
if err != nil {
return fmt.Errorf("can't create temp dir: %w", err)
}
defer os.RemoveAll(tmpDir)
keyFile := path.Join(tmpDir, "key.pem")
os.WriteFile(keyFile, keyPEM, 0600)
certFile := path.Join(tmpDir, "cert.pem")
os.WriteFile(certFile, certPEM, 0600)
if err := uploadCert(ctx, synowebapiCommand{}, certFile, keyFile, id); err != nil {
return err
}
return nil
}
type subject struct {
CommonName string `json:"common_name"`
}
type certificateInfo struct {
ID string `json:"id"`
Desc string `json:"desc"`
Subject subject `json:"subject"`
}
// listCerts fetches a list of the certificates that DSM knows about
func listCerts(ctx context.Context, c synoAPICaller) ([]certificateInfo, error) {
rawData, err := c.Call(ctx, "SYNO.Core.Certificate.CRT", "list", nil)
if err != nil {
return nil, err
}
var payload struct {
Certificates []certificateInfo `json:"certificates"`
}
if err := json.Unmarshal(rawData, &payload); err != nil {
return nil, fmt.Errorf("decoding certificate list response payload: %w", err)
}
return payload.Certificates, nil
}
// uploadCert creates or replaces a certificate. If id is given, it will attempt to replace the certificate with that ID.
func uploadCert(ctx context.Context, c synoAPICaller, certFile, keyFile string, id string) error {
params := map[string]string{
"key_tmp": keyFile,
"cert_tmp": certFile,
"desc": "Tailnet Certificate",
}
if id != "" {
params["id"] = id
}
rawData, err := c.Call(ctx, "SYNO.Core.Certificate", "import", params)
if err != nil {
return err
}
var payload struct {
NewID string `json:"id"`
}
if err := json.Unmarshal(rawData, &payload); err != nil {
return fmt.Errorf("decoding certificate upload response payload: %w", err)
}
log.Printf("Tailnet Certificate uploaded with ID %q.", payload.NewID)
return nil
}
type synoAPICaller interface {
Call(context.Context, string, string, map[string]string) (json.RawMessage, error)
}
type apiResponse struct {
Success bool `json:"success"`
Error *apiError `json:"error,omitempty"`
Data json.RawMessage `json:"data"`
}
type apiError struct {
Code int64 `json:"code"`
Errors string `json:"errors"`
}
// synowebapiCommand implements synoAPICaller using the /usr/syno/bin/synowebapi binary. Must be run as root.
type synowebapiCommand struct{}
func (s synowebapiCommand) Call(ctx context.Context, api, method string, params map[string]string) (json.RawMessage, error) {
args := []string{"--exec", fmt.Sprintf("api=%s", api), fmt.Sprintf("method=%s", method)}
for k, v := range params {
args = append(args, fmt.Sprintf("%s=%q", k, v))
}
out, err := exec.CommandContext(ctx, "/usr/syno/bin/synowebapi", args...).Output()
if err != nil {
return nil, fmt.Errorf("calling %q method of %q API: %v, %s", method, api, err, out)
}
var payload apiResponse
if err := json.Unmarshal(out, &payload); err != nil {
return nil, fmt.Errorf("decoding response json from %q method of %q API: %w", method, api, err)
}
if payload.Error != nil {
return nil, fmt.Errorf("error response from %q method of %q API: %v", method, api, payload.Error)
}
return payload.Data, nil
}

@ -0,0 +1,140 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package cli
import (
"context"
"encoding/json"
"fmt"
"reflect"
"testing"
)
type fakeAPICaller struct {
Data json.RawMessage
Error error
}
func (c fakeAPICaller) Call(_ context.Context, _, _ string, _ map[string]string) (json.RawMessage, error) {
return c.Data, c.Error
}
func Test_listCerts(t *testing.T) {
tests := []struct {
name string
caller synoAPICaller
want []certificateInfo
wantErr bool
}{
{
name: "normal response",
caller: fakeAPICaller{
Data: json.RawMessage(`{
"certificates" : [
{
"desc" : "Tailnet Certificate",
"id" : "cG2XBt",
"is_broken" : false,
"is_default" : false,
"issuer" : {
"common_name" : "R3",
"country" : "US",
"organization" : "Let's Encrypt"
},
"key_types" : "ECC",
"renewable" : false,
"services" : [
{
"display_name" : "DSM Desktop Service",
"display_name_i18n" : "common:web_desktop",
"isPkg" : false,
"multiple_cert" : true,
"owner" : "root",
"service" : "default",
"subscriber" : "system",
"user_setable" : true
}
],
"signature_algorithm" : "sha256WithRSAEncryption",
"subject" : {
"common_name" : "foo.tailscale.ts.net",
"sub_alt_name" : [ "foo.tailscale.ts.net" ]
},
"user_deletable" : true,
"valid_from" : "Sep 26 11:39:43 2023 GMT",
"valid_till" : "Dec 25 11:39:42 2023 GMT"
},
{
"desc" : "",
"id" : "sgmnpb",
"is_broken" : false,
"is_default" : false,
"issuer" : {
"city" : "Taipei",
"common_name" : "Synology Inc. CA",
"country" : "TW",
"organization" : "Synology Inc."
},
"key_types" : "",
"renewable" : false,
"self_signed_cacrt_info" : {
"issuer" : {
"city" : "Taipei",
"common_name" : "Synology Inc. CA",
"country" : "TW",
"organization" : "Synology Inc."
},
"subject" : {
"city" : "Taipei",
"common_name" : "Synology Inc. CA",
"country" : "TW",
"organization" : "Synology Inc."
}
},
"services" : [],
"signature_algorithm" : "sha256WithRSAEncryption",
"subject" : {
"city" : "Taipei",
"common_name" : "synology.com",
"country" : "TW",
"organization" : "Synology Inc.",
"sub_alt_name" : []
},
"user_deletable" : true,
"valid_from" : "May 27 00:23:19 2019 GMT",
"valid_till" : "Feb 11 00:23:19 2039 GMT"
}
]
}`),
Error: nil,
},
want: []certificateInfo{
{Desc: "Tailnet Certificate", ID: "cG2XBt", Subject: subject{CommonName: "foo.tailscale.ts.net"}},
{Desc: "", ID: "sgmnpb", Subject: subject{CommonName: "synology.com"}},
},
},
{
name: "call error",
caller: fakeAPICaller{nil, fmt.Errorf("caller failed")},
wantErr: true,
},
{
name: "payload decode error",
caller: fakeAPICaller{json.RawMessage("This isn't JSON!"), nil},
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := listCerts(context.Background(), tt.caller)
if (err != nil) != tt.wantErr {
t.Errorf("listCerts() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("listCerts() = %v, want %v", got, tt.want)
}
})
}
}

@ -22,10 +22,11 @@ import (
// used to configure Synology devices, but is now a compatibility alias to
// "tailscale configure synology".
var configureHostCmd = &ffcli.Command{
Name: "configure-host",
Exec: runConfigureSynology,
ShortHelp: synologyConfigureCmd.ShortHelp,
LongHelp: synologyConfigureCmd.LongHelp,
Name: "configure-host",
Exec: runConfigureSynology,
ShortUsage: "tailscale configure-host\n" + synologyConfigureCmd.ShortUsage,
ShortHelp: synologyConfigureCmd.ShortHelp,
LongHelp: hidden + synologyConfigureCmd.LongHelp,
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("configure-host")
return fs
@ -33,9 +34,10 @@ var configureHostCmd = &ffcli.Command{
}
var synologyConfigureCmd = &ffcli.Command{
Name: "synology",
Exec: runConfigureSynology,
ShortHelp: "Configure Synology to enable outbound connections",
Name: "synology",
Exec: runConfigureSynology,
ShortUsage: "tailscale configure synology",
ShortHelp: "Configure Synology to enable outbound connections",
LongHelp: strings.TrimSpace(`
This command is intended to run at boot as root on a Synology device to
create the /dev/net/tun device and give the tailscaled binary permission

@ -4,7 +4,6 @@
package cli
import (
"context"
"flag"
"runtime"
"strings"
@ -14,8 +13,9 @@ import (
)
var configureCmd = &ffcli.Command{
Name: "configure",
ShortHelp: "[ALPHA] Configure the host to enable more Tailscale features",
Name: "configure",
ShortUsage: "tailscale configure <subcommand>",
ShortHelp: "[ALPHA] Configure the host to enable more Tailscale features",
LongHelp: strings.TrimSpace(`
The 'configure' set of commands are intended to provide a way to enable different
services on the host to use Tailscale in more ways.
@ -25,14 +25,12 @@ services on the host to use Tailscale in more ways.
return fs
})(),
Subcommands: configureSubcommands(),
Exec: func(ctx context.Context, args []string) error {
return flag.ErrHelp
},
}
func configureSubcommands() (out []*ffcli.Command) {
if runtime.GOOS == "linux" && distro.Get() == distro.Synology {
out = append(out, synologyConfigureCmd)
out = append(out, synologyConfigureCertCmd)
}
return out
}

@ -45,9 +45,11 @@ import (
)
var debugCmd = &ffcli.Command{
Name: "debug",
Exec: runDebug,
LongHelp: `"tailscale debug" contains misc debug facilities; it is not a stable interface.`,
Name: "debug",
Exec: runDebug,
ShortUsage: "tailscale debug <debug-flags | subcommand>",
ShortHelp: "Debug commands",
LongHelp: hidden + `"tailscale debug" contains misc debug facilities; it is not a stable interface.`,
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("debug")
fs.StringVar(&debugArgs.file, "file", "", "get, delete:NAME, or NAME")
@ -58,15 +60,16 @@ var debugCmd = &ffcli.Command{
})(),
Subcommands: []*ffcli.Command{
{
Name: "derp-map",
Exec: runDERPMap,
ShortHelp: "print DERP map",
Name: "derp-map",
ShortUsage: "tailscale debug derp-map",
Exec: runDERPMap,
ShortHelp: "Print DERP map",
},
{
Name: "component-logs",
Exec: runDebugComponentLogs,
ShortHelp: "enable/disable debug logs for a component",
ShortUsage: "tailscale debug component-logs [" + strings.Join(ipn.DebuggableComponents, "|") + "]",
Exec: runDebugComponentLogs,
ShortHelp: "Enable/disable debug logs for a component",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("component-logs")
fs.DurationVar(&debugComponentLogsArgs.forDur, "for", time.Hour, "how long to enable debug logs for; zero or negative means to disable")
@ -74,14 +77,16 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "daemon-goroutines",
Exec: runDaemonGoroutines,
ShortHelp: "print tailscaled's goroutines",
Name: "daemon-goroutines",
ShortUsage: "tailscale debug daemon-goroutines",
Exec: runDaemonGoroutines,
ShortHelp: "Print tailscaled's goroutines",
},
{
Name: "daemon-logs",
Exec: runDaemonLogs,
ShortHelp: "watch tailscaled's server logs",
Name: "daemon-logs",
ShortUsage: "tailscale debug daemon-logs",
Exec: runDaemonLogs,
ShortHelp: "Watch tailscaled's server logs",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("daemon-logs")
fs.IntVar(&daemonLogsArgs.verbose, "verbose", 0, "verbosity level")
@ -90,9 +95,10 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "metrics",
Exec: runDaemonMetrics,
ShortHelp: "print tailscaled's metrics",
Name: "metrics",
ShortUsage: "tailscale debug metrics",
Exec: runDaemonMetrics,
ShortHelp: "Print tailscaled's metrics",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("metrics")
fs.BoolVar(&metricsArgs.watch, "watch", false, "print JSON dump of delta values")
@ -100,80 +106,95 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "env",
Exec: runEnv,
ShortHelp: "print cmd/tailscale environment",
Name: "env",
ShortUsage: "tailscale debug env",
Exec: runEnv,
ShortHelp: "Print cmd/tailscale environment",
},
{
Name: "stat",
Exec: runStat,
ShortHelp: "stat a file",
Name: "stat",
ShortUsage: "tailscale debug stat <files...>",
Exec: runStat,
ShortHelp: "Stat a file",
},
{
Name: "hostinfo",
Exec: runHostinfo,
ShortHelp: "print hostinfo",
Name: "hostinfo",
ShortUsage: "tailscale debug hostinfo",
Exec: runHostinfo,
ShortHelp: "Print hostinfo",
},
{
Name: "local-creds",
Exec: runLocalCreds,
ShortHelp: "print how to access Tailscale LocalAPI",
Name: "local-creds",
ShortUsage: "tailscale debug local-creds",
Exec: runLocalCreds,
ShortHelp: "Print how to access Tailscale LocalAPI",
},
{
Name: "restun",
Exec: localAPIAction("restun"),
ShortHelp: "force a magicsock restun",
Name: "restun",
ShortUsage: "tailscale debug restun",
Exec: localAPIAction("restun"),
ShortHelp: "Force a magicsock restun",
},
{
Name: "rebind",
Exec: localAPIAction("rebind"),
ShortHelp: "force a magicsock rebind",
Name: "rebind",
ShortUsage: "tailscale debug rebind",
Exec: localAPIAction("rebind"),
ShortHelp: "Force a magicsock rebind",
},
{
Name: "derp-set-on-demand",
Exec: localAPIAction("derp-set-homeless"),
ShortHelp: "enable DERP on-demand mode (breaks reachability)",
Name: "derp-set-on-demand",
ShortUsage: "tailscale debug derp-set-on-demand",
Exec: localAPIAction("derp-set-homeless"),
ShortHelp: "Enable DERP on-demand mode (breaks reachability)",
},
{
Name: "derp-unset-on-demand",
Exec: localAPIAction("derp-unset-homeless"),
ShortHelp: "disable DERP on-demand mode",
Name: "derp-unset-on-demand",
ShortUsage: "tailscale debug derp-unset-on-demand",
Exec: localAPIAction("derp-unset-homeless"),
ShortHelp: "Disable DERP on-demand mode",
},
{
Name: "break-tcp-conns",
Exec: localAPIAction("break-tcp-conns"),
ShortHelp: "break any open TCP connections from the daemon",
Name: "break-tcp-conns",
ShortUsage: "tailscale debug break-tcp-conns",
Exec: localAPIAction("break-tcp-conns"),
ShortHelp: "Break any open TCP connections from the daemon",
},
{
Name: "break-derp-conns",
Exec: localAPIAction("break-derp-conns"),
ShortHelp: "break any open DERP connections from the daemon",
Name: "break-derp-conns",
ShortUsage: "tailscale debug break-derp-conns",
Exec: localAPIAction("break-derp-conns"),
ShortHelp: "Break any open DERP connections from the daemon",
},
{
Name: "pick-new-derp",
Exec: localAPIAction("pick-new-derp"),
ShortHelp: "switch to some other random DERP home region for a short time",
Name: "pick-new-derp",
ShortUsage: "tailscale debug pick-new-derp",
Exec: localAPIAction("pick-new-derp"),
ShortHelp: "Switch to some other random DERP home region for a short time",
},
{
Name: "force-netmap-update",
Exec: localAPIAction("force-netmap-update"),
ShortHelp: "force a full no-op netmap update (for load testing)",
Name: "force-netmap-update",
ShortUsage: "tailscale debug force-netmap-update",
Exec: localAPIAction("force-netmap-update"),
ShortHelp: "Force a full no-op netmap update (for load testing)",
},
{
// TODO(bradfitz,maisem): eventually promote this out of debug
Name: "reload-config",
Exec: reloadConfig,
ShortHelp: "reload config",
Name: "reload-config",
ShortUsage: "tailscale debug reload-config",
Exec: reloadConfig,
ShortHelp: "Reload config",
},
{
Name: "control-knobs",
Exec: debugControlKnobs,
ShortHelp: "see current control knobs",
Name: "control-knobs",
ShortUsage: "tailscale debug control-knobs",
Exec: debugControlKnobs,
ShortHelp: "See current control knobs",
},
{
Name: "prefs",
Exec: runPrefs,
ShortHelp: "print prefs",
Name: "prefs",
ShortUsage: "tailscale debug prefs",
Exec: runPrefs,
ShortHelp: "Print prefs",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("prefs")
fs.BoolVar(&prefsArgs.pretty, "pretty", false, "If true, pretty-print output")
@ -181,9 +202,10 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "watch-ipn",
Exec: runWatchIPN,
ShortHelp: "subscribe to IPN message bus",
Name: "watch-ipn",
ShortUsage: "tailscale debug watch-ipn",
Exec: runWatchIPN,
ShortHelp: "Subscribe to IPN message bus",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("watch-ipn")
fs.BoolVar(&watchIPNArgs.netmap, "netmap", true, "include netmap in messages")
@ -194,9 +216,10 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "netmap",
Exec: runNetmap,
ShortHelp: "print the current network map",
Name: "netmap",
ShortUsage: "tailscale debug netmap",
Exec: runNetmap,
ShortHelp: "Print the current network map",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("netmap")
fs.BoolVar(&netmapArgs.showPrivateKey, "show-private-key", false, "include node private key in printed netmap")
@ -204,14 +227,17 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "via",
Name: "via",
ShortUsage: "tailscale debug via <site-id> <v4-cidr>\n" +
"tailscale debug via <v6-route>",
Exec: runVia,
ShortHelp: "convert between site-specific IPv4 CIDRs and IPv6 'via' routes",
ShortHelp: "Convert between site-specific IPv4 CIDRs and IPv6 'via' routes",
},
{
Name: "ts2021",
Exec: runTS2021,
ShortHelp: "debug ts2021 protocol connectivity",
Name: "ts2021",
ShortUsage: "tailscale debug ts2021",
Exec: runTS2021,
ShortHelp: "Debug ts2021 protocol connectivity",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("ts2021")
fs.StringVar(&ts2021Args.host, "host", "controlplane.tailscale.com", "hostname of control plane")
@ -221,9 +247,10 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "set-expire",
Exec: runSetExpire,
ShortHelp: "manipulate node key expiry for testing",
Name: "set-expire",
ShortUsage: "tailscale debug set-expire --in=1m",
Exec: runSetExpire,
ShortHelp: "Manipulate node key expiry for testing",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("set-expire")
fs.DurationVar(&setExpireArgs.in, "in", 0, "if non-zero, set node key to expire this duration from now")
@ -231,9 +258,10 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "dev-store-set",
Exec: runDevStoreSet,
ShortHelp: "set a key/value pair during development",
Name: "dev-store-set",
ShortUsage: "tailscale debug dev-store-set",
Exec: runDevStoreSet,
ShortHelp: "Set a key/value pair during development",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("store-set")
fs.BoolVar(&devStoreSetArgs.danger, "danger", false, "accept danger")
@ -241,14 +269,16 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "derp",
Exec: runDebugDERP,
ShortHelp: "test a DERP configuration",
Name: "derp",
ShortUsage: "tailscale debug derp",
Exec: runDebugDERP,
ShortHelp: "Test a DERP configuration",
},
{
Name: "capture",
Exec: runCapture,
ShortHelp: "streams pcaps for debugging",
Name: "capture",
ShortUsage: "tailscale debug capture",
Exec: runCapture,
ShortHelp: "Streams pcaps for debugging",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("capture")
fs.StringVar(&captureArgs.outFile, "o", "", "path to stream the pcap (or - for stdout), leave empty to start wireshark")
@ -256,9 +286,10 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "portmap",
Exec: debugPortmap,
ShortHelp: "run portmap debugging",
Name: "portmap",
ShortUsage: "tailscale debug portmap",
Exec: debugPortmap,
ShortHelp: "Run portmap debugging",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("portmap")
fs.DurationVar(&debugPortmapArgs.duration, "duration", 5*time.Second, "timeout for port mapping")
@ -270,14 +301,16 @@ var debugCmd = &ffcli.Command{
})(),
},
{
Name: "peer-endpoint-changes",
Exec: runPeerEndpointChanges,
ShortHelp: "prints debug information about a peer's endpoint changes",
Name: "peer-endpoint-changes",
ShortUsage: "tailscale debug peer-endpoint-changes <hostname-or-IP>",
Exec: runPeerEndpointChanges,
ShortHelp: "Prints debug information about a peer's endpoint changes",
},
{
Name: "dial-types",
Exec: runDebugDialTypes,
ShortHelp: "prints debug information about connecting to a given host or IP",
Name: "dial-types",
ShortUsage: "tailscale debug dial-types <hostname-or-IP> <port>",
Exec: runDebugDialTypes,
ShortHelp: "Prints debug information about connecting to a given host or IP",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("dial-types")
fs.StringVar(&debugDialTypesArgs.network, "network", "tcp", `network type to dial ("tcp", "udp", etc.)`)
@ -314,7 +347,7 @@ func outName(dst string) string {
func runDebug(ctx context.Context, args []string) error {
if len(args) > 0 {
return errors.New("unknown arguments")
return fmt.Errorf("tailscale debug: unknown subcommand: %s", args[0])
}
var usedFlag bool
if out := debugArgs.cpuFile; out != "" {
@ -369,7 +402,7 @@ func runDebug(ctx context.Context, args []string) error {
// to subcommands.
return nil
}
return errors.New("see 'tailscale debug --help")
return errors.New("tailscale debug: subcommand or flag required")
}
func runLocalCreds(ctx context.Context, args []string) error {
@ -453,7 +486,7 @@ func runWatchIPN(ctx context.Context, args []string) error {
return err
}
defer watcher.Close()
fmt.Fprintf(os.Stderr, "Connected.\n")
fmt.Fprintf(Stderr, "Connected.\n")
for seen := 0; watchIPNArgs.count == 0 || seen < watchIPNArgs.count; seen++ {
n, err := watcher.Next()
if err != nil {
@ -563,7 +596,7 @@ func runStat(ctx context.Context, args []string) error {
func runHostinfo(ctx context.Context, args []string) error {
hi := hostinfo.New()
j, _ := json.MarshalIndent(hi, "", " ")
os.Stdout.Write(j)
Stdout.Write(j)
return nil
}
@ -716,7 +749,7 @@ var ts2021Args struct {
}
func runTS2021(ctx context.Context, args []string) error {
log.SetOutput(os.Stdout)
log.SetOutput(Stdout)
log.SetFlags(log.Ltime | log.Lmicroseconds)
keysURL := "https://" + ts2021Args.host + "/key?v=" + strconv.Itoa(ts2021Args.version)
@ -810,7 +843,7 @@ var debugComponentLogsArgs struct {
func runDebugComponentLogs(ctx context.Context, args []string) error {
if len(args) != 1 {
return errors.New("usage: debug component-logs [" + strings.Join(ipn.DebuggableComponents, "|") + "]")
return errors.New("usage: tailscale debug component-logs [" + strings.Join(ipn.DebuggableComponents, "|") + "]")
}
component := args[0]
dur := debugComponentLogsArgs.forDur
@ -833,7 +866,7 @@ var devStoreSetArgs struct {
func runDevStoreSet(ctx context.Context, args []string) error {
if len(args) != 2 {
return errors.New("usage: dev-store-set --danger <key> <value>")
return errors.New("usage: tailscale debug dev-store-set --danger <key> <value>")
}
if !devStoreSetArgs.danger {
return errors.New("this command is dangerous; use --danger to proceed")
@ -851,7 +884,7 @@ func runDevStoreSet(ctx context.Context, args []string) error {
func runDebugDERP(ctx context.Context, args []string) error {
if len(args) != 1 {
return errors.New("usage: debug derp <region>")
return errors.New("usage: tailscale debug derp <region>")
}
st, err := localClient.DebugDERPRegion(ctx, args[0])
if err != nil {
@ -867,7 +900,7 @@ var setExpireArgs struct {
func runSetExpire(ctx context.Context, args []string) error {
if len(args) != 0 || setExpireArgs.in == 0 {
return errors.New("usage --in=<duration>")
return errors.New("usage: tailscale debug set-expire --in=<duration>")
}
return localClient.DebugSetExpireIn(ctx, setExpireArgs.in)
}
@ -885,7 +918,7 @@ func runCapture(ctx context.Context, args []string) error {
switch captureArgs.outFile {
case "-":
fmt.Fprintln(os.Stderr, "Press Ctrl-C to stop the capture.")
fmt.Fprintln(Stderr, "Press Ctrl-C to stop the capture.")
_, err = io.Copy(os.Stdout, stream)
return err
case "":
@ -911,7 +944,7 @@ func runCapture(ctx context.Context, args []string) error {
return err
}
defer f.Close()
fmt.Fprintln(os.Stderr, "Press Ctrl-C to stop the capture.")
fmt.Fprintln(Stderr, "Press Ctrl-C to stop the capture.")
_, err = io.Copy(f, stream)
return err
}
@ -966,7 +999,7 @@ func runPeerEndpointChanges(ctx context.Context, args []string) error {
}
if len(args) != 1 || args[0] == "" {
return errors.New("usage: peer-status <hostname-or-IP>")
return errors.New("usage: tailscale debug peer-endpoint-changes <hostname-or-IP>")
}
var ip string
@ -1042,7 +1075,7 @@ func runDebugDialTypes(ctx context.Context, args []string) error {
}
if len(args) != 2 || args[0] == "" || args[1] == "" {
return errors.New("usage: dial-types <hostname-or-IP> <port>")
return errors.New("usage: tailscale debug dial-types <hostname-or-IP> <port>")
}
port, err := strconv.ParseUint(args[1], 10, 16)

@ -14,7 +14,7 @@ import (
var downCmd = &ffcli.Command{
Name: "down",
ShortUsage: "down",
ShortUsage: "tailscale down",
ShortHelp: "Disconnect from Tailscale",
Exec: runDown,

@ -5,116 +5,113 @@ package cli
import (
"context"
"errors"
"fmt"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/tailfs"
"tailscale.com/drive"
)
const (
shareSetUsage = "share set <name> <path>"
shareRenameUsage = "share rename <oldname> <newname>"
shareRemoveUsage = "share remove <name>"
shareListUsage = "share list"
driveShareUsage = "tailscale drive share <name> <path>"
driveRenameUsage = "tailscale drive rename <oldname> <newname>"
driveUnshareUsage = "tailscale drive unshare <name>"
driveListUsage = "tailscale drive list"
)
var shareCmd = &ffcli.Command{
Name: "share",
var driveCmd = &ffcli.Command{
Name: "drive",
ShortHelp: "Share a directory with your tailnet",
ShortUsage: strings.Join([]string{
shareSetUsage,
shareRemoveUsage,
shareListUsage,
}, "\n "),
driveShareUsage,
driveRenameUsage,
driveUnshareUsage,
driveListUsage,
}, "\n"),
LongHelp: buildShareLongHelp(),
UsageFunc: usageFuncNoDefaultValues,
Subcommands: []*ffcli.Command{
{
Name: "set",
Exec: runShareSet,
ShortHelp: "[ALPHA] set a share",
UsageFunc: usageFunc,
Name: "share",
ShortUsage: driveShareUsage,
Exec: runDriveShare,
ShortHelp: "[ALPHA] Create or modify a share",
},
{
Name: "rename",
ShortHelp: "[ALPHA] rename a share",
Exec: runShareRename,
UsageFunc: usageFunc,
Name: "rename",
ShortUsage: driveRenameUsage,
ShortHelp: "[ALPHA] Rename a share",
Exec: runDriveRename,
},
{
Name: "remove",
ShortHelp: "[ALPHA] remove a share",
Exec: runShareRemove,
UsageFunc: usageFunc,
Name: "unshare",
ShortUsage: driveUnshareUsage,
ShortHelp: "[ALPHA] Remove a share",
Exec: runDriveUnshare,
},
{
Name: "list",
ShortHelp: "[ALPHA] list current shares",
Exec: runShareList,
UsageFunc: usageFunc,
Name: "list",
ShortUsage: driveListUsage,
ShortHelp: "[ALPHA] List current shares",
Exec: runDriveList,
},
},
Exec: func(context.Context, []string) error {
return errors.New("share subcommand required; run 'tailscale share -h' for details")
},
}
// runShareSet is the entry point for the "tailscale share set" command.
func runShareSet(ctx context.Context, args []string) error {
// runDriveShare is the entry point for the "tailscale drive share" command.
func runDriveShare(ctx context.Context, args []string) error {
if len(args) != 2 {
return fmt.Errorf("usage: tailscale %v", shareSetUsage)
return fmt.Errorf("usage: %s", driveShareUsage)
}
name, path := args[0], args[1]
err := localClient.TailFSShareSet(ctx, &tailfs.Share{
err := localClient.DriveShareSet(ctx, &drive.Share{
Name: name,
Path: path,
})
if err == nil {
fmt.Printf("Set share %q at %q\n", name, path)
fmt.Printf("Sharing %q as %q\n", path, name)
}
return err
}
// runShareRemove is the entry point for the "tailscale share remove" command.
func runShareRemove(ctx context.Context, args []string) error {
// runDriveUnshare is the entry point for the "tailscale drive unshare" command.
func runDriveUnshare(ctx context.Context, args []string) error {
if len(args) != 1 {
return fmt.Errorf("usage: tailscale %v", shareRemoveUsage)
return fmt.Errorf("usage: %s", driveUnshareUsage)
}
name := args[0]
err := localClient.TailFSShareRemove(ctx, name)
err := localClient.DriveShareRemove(ctx, name)
if err == nil {
fmt.Printf("Removed share %q\n", name)
fmt.Printf("No longer sharing %q\n", name)
}
return err
}
// runShareRename is the entry point for the "tailscale share rename" command.
func runShareRename(ctx context.Context, args []string) error {
// runDriveRename is the entry point for the "tailscale drive rename" command.
func runDriveRename(ctx context.Context, args []string) error {
if len(args) != 2 {
return fmt.Errorf("usage: tailscale %v", shareRenameUsage)
return fmt.Errorf("usage: %s", driveRenameUsage)
}
oldName := args[0]
newName := args[1]
err := localClient.TailFSShareRename(ctx, oldName, newName)
err := localClient.DriveShareRename(ctx, oldName, newName)
if err == nil {
fmt.Printf("Renamed share %q to %q\n", oldName, newName)
}
return err
}
// runShareList is the entry point for the "tailscale share list" command.
func runShareList(ctx context.Context, args []string) error {
// runDriveList is the entry point for the "tailscale drive list" command.
func runDriveList(ctx context.Context, args []string) error {
if len(args) != 0 {
return fmt.Errorf("usage: tailscale %v", shareListUsage)
return fmt.Errorf("usage: %s", driveListUsage)
}
shares, err := localClient.TailFSShareList(ctx)
shares, err := localClient.DriveShareList(ctx)
if err != nil {
return err
}
@ -145,17 +142,17 @@ func runShareList(ctx context.Context, args []string) error {
func buildShareLongHelp() string {
longHelpAs := ""
if tailfs.AllowShareAs() {
if drive.AllowShareAs() {
longHelpAs = shareLongHelpAs
}
return fmt.Sprintf(shareLongHelpBase, longHelpAs)
}
var shareLongHelpBase = `Tailscale share allows you to share directories with other machines on your tailnet.
var shareLongHelpBase = `Taildrive allows you to share directories with other machines on your tailnet.
In order to share folders, your node needs to have the node attribute "tailfs:share".
In order to share folders, your node needs to have the node attribute "drive:share".
In order to access shares, your node needs to have the node attribute "tailfs:access".
In order to access shares, your node needs to have the node attribute "drive:access".
For example, to enable sharing and accessing shares for all member nodes:
@ -163,14 +160,14 @@ For example, to enable sharing and accessing shares for all member nodes:
{
"target": ["autogroup:member"],
"attr": [
"tailfs:share",
"tailfs:access",
"drive:share",
"drive:access",
],
}]
Each share is identified by a name and points to a directory at a specific path. For example, to share the path /Users/me/Documents under the name "docs", you would run:
$ tailscale share set docs /Users/me/Documents
$ tailscale drive share docs /Users/me/Documents
Note that the system forces share names to lowercase to avoid problems with clients that don't support case-sensitive filenames.
@ -184,60 +181,50 @@ In order to access this share, other machines on the tailnet can connect to the
http://100.100.100.100:8080/mydomain.com/mylaptop/docs
Permissions to access shares are controlled via ACLs. For example, to give yourself read/write access and give the group "home" read-only access to the above share, use the below ACL grants:
Permissions to access shares are controlled via ACLs. For example, to give the group "home" read-only access to the above share, use the below ACL grant:
"grants": [
{
"src": ["mylogin@domain.com"],
"dst": ["mylaptop's ip address"],
"app": {
"tailscale.com/cap/tailfs": [{
"shares": ["docs"],
"access": "rw"
}]
}
},
{
"src": ["group:home"],
"dst": ["mylaptop"],
"app": {
"tailscale.com/cap/tailfs": [{
"tailscale.com/cap/drive": [{
"shares": ["docs"],
"access": "ro"
}]
}
}]
To categorically give yourself access to all your shares, you can use the below ACL grant:
Whenever anyone in the group "home" connects to the share, they connect as if they are using your local machine user. They'll be able to read the same files as your user, and if they create files, those files will be owned by your user.%s
"grants": [
{
"src": ["autogroup:member"],
"dst": ["autogroup:self"],
"app": {
"tailscale.com/cap/tailfs": [{
"shares": ["*"],
"access": "rw"
}]
}
}]
On small tailnets, it may be convenient to categorically give all users full access to their own shares. That can be accomplished with the below grant.
Whenever either you or anyone in the group "home" connects to the share, they connect as if they are using your local machine user. They'll be able to read the same files as your user and if they create files, those files will be owned by your user.%s
"grants": [
{
"src": ["autogroup:member"],
"dst": ["autogroup:self"],
"app": {
"tailscale.com/cap/drive": [{
"shares": ["*"],
"access": "rw"
}]
}
}]
You can rename shares, for example you could rename the above share by running:
$ tailscale share rename docs newdocs
$ tailscale drive rename docs newdocs
You can remove shares by name, for example you could remove the above share by running:
$ tailscale share remove newdocs
$ tailscale drive unshare newdocs
You can get a list of currently published shares by running:
$ tailscale share list`
$ tailscale drive list`
var shareLongHelpAs = `
const shareLongHelpAs = `
If you want a share to be accessed as a different user, you can use sudo to accomplish this. For example, to create the aforementioned share as "theuser", you could run:
$ sudo -u theuser tailscale share set docs /Users/theuser/Documents`
$ sudo -u theuser tailscale drive share docs /Users/theuser/Documents`
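To make the access path described in the help text concrete, here is a rough sketch of listing a share from another tailnet machine. The URL shape follows the help text above; using a WebDAV PROPFIND request is an assumption about how to enumerate the share, not something this changeset defines:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Taildrive is exposed to peers via a local WebDAV endpoint; the path is
	// <tailnet domain>/<machine>/<share name> per the help text above.
	req, err := http.NewRequest("PROPFIND", "http://100.100.100.100:8080/mydomain.com/mylaptop/docs/", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Depth", "1") // list only the immediate children of the share

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // WebDAV multistatus XML describing the share's contents
}
```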

@ -9,44 +9,85 @@ import (
"errors"
"flag"
"fmt"
"os"
"slices"
"strings"
"text/tabwriter"
"github.com/peterbourgon/ff/v3/ffcli"
xmaps "golang.org/x/exp/maps"
"tailscale.com/envknob"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg"
)
var exitNodeCmd = &ffcli.Command{
Name: "exit-node",
ShortUsage: "exit-node [flags]",
ShortHelp: "Show machines on your tailnet configured as exit nodes",
LongHelp: "Show machines on your tailnet configured as exit nodes",
Subcommands: []*ffcli.Command{
{
Name: "list",
ShortUsage: "exit-node list [flags]",
ShortHelp: "Show exit nodes",
Exec: runExitNodeList,
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("list")
fs.StringVar(&exitNodeArgs.filter, "filter", "", "filter exit nodes by country")
return fs
})(),
},
},
Exec: func(context.Context, []string) error {
return errors.New("exit-node subcommand required; run 'tailscale exit-node -h' for details")
},
func exitNodeCmd() *ffcli.Command {
return &ffcli.Command{
Name: "exit-node",
ShortUsage: "tailscale exit-node [flags]",
ShortHelp: "Show machines on your tailnet configured as exit nodes",
Subcommands: append([]*ffcli.Command{
{
Name: "list",
ShortUsage: "tailscale exit-node list [flags]",
ShortHelp: "Show exit nodes",
Exec: runExitNodeList,
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("list")
fs.StringVar(&exitNodeArgs.filter, "filter", "", "filter exit nodes by country")
return fs
})(),
},
{
Name: "suggest",
ShortUsage: "tailscale exit-node suggest",
ShortHelp: "Suggests the best available exit node",
Exec: runExitNodeSuggest,
}},
(func() []*ffcli.Command {
if !envknob.UseWIPCode() {
return nil
}
return []*ffcli.Command{
{
Name: "connect",
ShortUsage: "tailscale exit-node connect",
ShortHelp: "Connect to most recently used exit node",
Exec: exitNodeSetUse(true),
},
{
Name: "disconnect",
ShortUsage: "tailscale exit-node disconnect",
ShortHelp: "Disconnect from current exit node, if any",
Exec: exitNodeSetUse(false),
},
}
})()...),
}
}
var exitNodeArgs struct {
filter string
}
func exitNodeSetUse(wantOn bool) func(ctx context.Context, args []string) error {
return func(ctx context.Context, args []string) error {
if len(args) > 0 {
return errors.New("unexpected non-flag arguments")
}
err := localClient.SetUseExitNode(ctx, wantOn)
if err != nil {
if !wantOn {
pref, err := localClient.GetPrefs(ctx)
if err == nil && pref.ExitNodeID == "" {
// Two processes concurrently turned it off.
return nil
}
}
}
return err
}
}
// runExitNodeList returns a formatted list of exit nodes for a tailnet.
// If the exit node has location and priority data, only the highest
// priority node for each city location is shown to the user.
@ -70,7 +111,6 @@ func runExitNodeList(ctx context.Context, args []string) error {
// We only show exit nodes under the exit-node subcommand.
continue
}
peers = append(peers, ps)
}
@ -84,24 +124,49 @@ func runExitNodeList(ctx context.Context, args []string) error {
return fmt.Errorf("no exit nodes found for %q", exitNodeArgs.filter)
}
w := tabwriter.NewWriter(os.Stdout, 10, 5, 5, ' ', 0)
w := tabwriter.NewWriter(Stdout, 10, 5, 5, ' ', 0)
defer w.Flush()
fmt.Fprintf(w, "\n %s\t%s\t%s\t%s\t%s\t", "IP", "HOSTNAME", "COUNTRY", "CITY", "STATUS")
for _, country := range filteredPeers.Countries {
for _, city := range country.Cities {
for _, peer := range city.Peers {
fmt.Fprintf(w, "\n %s\t%s\t%s\t%s\t%s\t", peer.TailscaleIPs[0], strings.Trim(peer.DNSName, "."), country.Name, city.Name, peerStatus(peer))
}
}
}
fmt.Fprintln(w)
fmt.Fprintln(w)
fmt.Fprintln(w, "# To use an exit node, use `tailscale set --exit-node=` followed by the hostname or IP")
fmt.Fprintln(w, "# To use an exit node, use `tailscale set --exit-node=` followed by the hostname or IP.")
if hasAnyExitNodeSuggestions(peers) {
fmt.Fprintln(w, "# To have Tailscale suggest an exit node, use `tailscale exit-node suggest`.")
}
return nil
}
// runExitNodeSuggest prints the name and tailcfg.StableNodeID of the suggested exit node to connect to.
// If there are no DERP-based exit nodes to choose from, or finding a suggestion fails, the command returns an error saying so.
func runExitNodeSuggest(ctx context.Context, args []string) error {
res, err := localClient.SuggestExitNode(ctx)
if err != nil {
return fmt.Errorf("suggest exit node: %w", err)
}
if res.ID == "" {
fmt.Println("No exit node suggestion is available.")
return nil
}
fmt.Printf("Suggested exit node: %v\nTo accept this suggestion, use `tailscale set --exit-node=%v`.\n", res.Name, res.ID)
return nil
}
func hasAnyExitNodeSuggestions(peers []*ipnstate.PeerStatus) bool {
for _, peer := range peers {
if peer.HasCap(tailcfg.NodeAttrSuggestExitNode) {
return true
}
}
return false
}
// peerStatus returns a string representing the current state of
// a peer. If there is no notable state, a - is returned.
func peerStatus(peer *ipnstate.PeerStatus) string {
@ -137,46 +202,51 @@ type filteredCity struct {
const noLocationData = "-"
var noLocation = &tailcfg.Location{
Country: noLocationData,
CountryCode: noLocationData,
City: noLocationData,
CityCode: noLocationData,
}
// filterFormatAndSortExitNodes filters and sorts exit nodes into
// alphabetical order, by country, city and then by priority if
// present.
// If an exit node has location data, and the country has more than
// once city, an `Any` city is added to the country that contains the
// one city, an `Any` city is added to the country that contains the
// highest priority exit node within that country.
// For exit nodes without location data, their country fields are
// defined as '-' to indicate that the data is not available.
func filterFormatAndSortExitNodes(peers []*ipnstate.PeerStatus, filterBy string) filteredExitNodes {
// first get peers into some fixed order, as code below doesn't break ties
// and our input comes from a random range-over-map.
slices.SortFunc(peers, func(a, b *ipnstate.PeerStatus) int {
return strings.Compare(a.DNSName, b.DNSName)
})
countries := make(map[string]*filteredCountry)
cities := make(map[string]*filteredCity)
for _, ps := range peers {
if ps.Location == nil {
ps.Location = &tailcfg.Location{
Country: noLocationData,
CountryCode: noLocationData,
City: noLocationData,
CityCode: noLocationData,
}
}
loc := cmp.Or(ps.Location, noLocation)
if filterBy != "" && ps.Location.Country != filterBy {
if filterBy != "" && loc.Country != filterBy {
continue
}
co, coOK := countries[ps.Location.CountryCode]
if !coOK {
co, ok := countries[loc.CountryCode]
if !ok {
co = &filteredCountry{
Name: ps.Location.Country,
Name: loc.Country,
}
countries[ps.Location.CountryCode] = co
countries[loc.CountryCode] = co
}
ci, ciOK := cities[ps.Location.CityCode]
if !ciOK {
ci, ok := cities[loc.CityCode]
if !ok {
ci = &filteredCity{
Name: ps.Location.City,
Name: loc.City,
}
cities[ps.Location.CityCode] = ci
cities[loc.CityCode] = ci
co.Cities = append(co.Cities, ci)
}
ci.Peers = append(ci.Peers, ps)
@ -193,10 +263,10 @@ func filterFormatAndSortExitNodes(peers []*ipnstate.PeerStatus, filterBy string)
continue
}
var countryANYPeer []*ipnstate.PeerStatus
var countryAnyPeer []*ipnstate.PeerStatus
for _, city := range country.Cities {
sortPeersByPriority(city.Peers)
countryANYPeer = append(countryANYPeer, city.Peers...)
countryAnyPeer = append(countryAnyPeer, city.Peers...)
var reducedCityPeers []*ipnstate.PeerStatus
for i, peer := range city.Peers {
if i == 0 || peer.ExitNode {
@ -208,7 +278,7 @@ func filterFormatAndSortExitNodes(peers []*ipnstate.PeerStatus, filterBy string)
city.Peers = reducedCityPeers
}
sortByCityName(country.Cities)
sortPeersByPriority(countryANYPeer)
sortPeersByPriority(countryAnyPeer)
if len(country.Cities) > 1 {
// For countries with more than one city, we want to return the
@ -216,7 +286,7 @@ func filterFormatAndSortExitNodes(peers []*ipnstate.PeerStatus, filterBy string)
country.Cities = append([]*filteredCity{
{
Name: "Any",
Peers: []*ipnstate.PeerStatus{countryANYPeer[0]},
Peers: []*ipnstate.PeerStatus{countryAnyPeer[0]},
},
}, country.Cities...)
}

@ -0,0 +1,160 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19 && !ts_omit_completion
// Package ffcomplete provides shell tab-completion of subcommands, flags and
// arguments for Go programs written with [ffcli].
//
// The shell integration scripts have been extracted from Cobra
// (https://cobra.dev/), whose authors deserve most of the credit for this work.
// These shell completion functions invoke `$0 completion __complete -- ...`
// which is wired up to [Complete].
package ffcomplete
import (
"context"
"flag"
"fmt"
"io"
"log"
"os"
"strconv"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/cmd/tailscale/cli/ffcomplete/internal"
"tailscale.com/tempfork/spf13/cobra"
)
type compOpts struct {
showFlags bool
showDescs bool
}
func newFS(name string, opts *compOpts) *flag.FlagSet {
fs := flag.NewFlagSet(name, flag.ContinueOnError)
fs.BoolVar(&opts.showFlags, "flags", true, "Suggest flag completions with subcommands")
fs.BoolVar(&opts.showDescs, "descs", true, "Include flag, subcommand, and other descriptions in completions")
return fs
}
// Inject adds the 'completion' subcommand to the root command, which provides
// the user with shell scripts that call `completion __complete` to produce
// tab-completion suggestions.
//
// root.Name needs to match the command that the user is tab-completing for the
// shell script to work as expected by default.
//
// The hide function is called with the __complete Command instance to provide a
// hook to omit it from the help output, if desired.
func Inject(root *ffcli.Command, hide func(*ffcli.Command), usageFunc func(*ffcli.Command) string) {
var opts compOpts
compFS := newFS("completion", &opts)
completeCmd := &ffcli.Command{
Name: "__complete",
ShortUsage: root.Name + " completion __complete -- <args to complete...>",
ShortHelp: "Tab-completion suggestions for interactive shells",
UsageFunc: usageFunc,
FlagSet: compFS,
Exec: func(ctx context.Context, args []string) error {
// Set up debug logging for the rest of this function call.
if t := os.Getenv("BASH_COMP_DEBUG_FILE"); t != "" {
tf, err := os.OpenFile(t, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
if err != nil {
return fmt.Errorf("opening debug file: %w", err)
}
defer func(origW io.Writer, origPrefix string, origFlags int) {
log.SetOutput(origW)
log.SetFlags(origFlags)
log.SetPrefix(origPrefix)
tf.Close()
}(log.Writer(), log.Prefix(), log.Flags())
log.SetOutput(tf)
log.SetFlags(log.Lshortfile)
log.SetPrefix("debug: ")
}
// Send back the results to the shell.
words, dir, err := internal.Complete(root, args, opts.showFlags, opts.showDescs)
if err != nil {
dir = ShellCompDirectiveError
}
for _, word := range words {
fmt.Println(word)
}
fmt.Println(":" + strconv.Itoa(int(dir)))
return err
},
}
if hide != nil {
hide(completeCmd)
}
root.Subcommands = append(
root.Subcommands,
&ffcli.Command{
Name: "completion",
ShortUsage: root.Name + " completion <shell> [--flags] [--descs]",
ShortHelp: "Shell tab-completion scripts",
LongHelp: fmt.Sprintf(cobra.UsageTemplate, root.Name),
// Print help if run without args.
Exec: func(ctx context.Context, args []string) error { return flag.ErrHelp },
// Omit the '__complete' subcommand from the 'completion' help.
UsageFunc: func(c *ffcli.Command) string {
// Filter the subcommands to omit '__complete'.
s := make([]*ffcli.Command, 0, len(c.Subcommands))
for _, sub := range c.Subcommands {
if !strings.HasPrefix(sub.Name, "__") {
s = append(s, sub)
}
}
// Swap in the filtered subcommands list for the rest of the call.
defer func(r []*ffcli.Command) { c.Subcommands = r }(c.Subcommands)
c.Subcommands = s
// Render the usage.
if usageFunc == nil {
return ffcli.DefaultUsageFunc(c)
}
return usageFunc(c)
},
Subcommands: append(
scriptCmds(root, usageFunc),
completeCmd,
),
},
)
}
// Flag registers a completion function for the flag in fs with given name.
// comp will always be called with a 1-element slice.
//
// comp will be called to return suggestions when the user tries to tab-complete
// '--name=<TAB>' or '--name <TAB>' for the commands using fs.
func Flag(fs *flag.FlagSet, name string, comp CompleteFunc) {
f := fs.Lookup(name)
if f == nil {
panic(fmt.Errorf("ffcomplete.Flag: flag %s not found", name))
}
if internal.CompleteFlags == nil {
internal.CompleteFlags = make(map[*flag.Flag]CompleteFunc)
}
internal.CompleteFlags[f] = comp
}
// Args registers a completion function for the args of cmd.
//
// comp will be called to return suggestions when the user tries to tab-complete
// `prog <TAB>` or `prog subcmd arg1 <TAB>`, for example.
func Args(cmd *ffcli.Command, comp CompleteFunc) {
if internal.CompleteCmds == nil {
internal.CompleteCmds = make(map[*ffcli.Command]CompleteFunc)
}
internal.CompleteCmds[cmd] = comp
}
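As a usage sketch of the API above: a hypothetical program named "prog" wiring in completion support, with nil hide and usageFunc falling back to the defaults handled in Inject (this assumes the ffcomplete package is importable from your module; it is not part of this changeset):

```go
package main

import (
	"context"
	"errors"
	"flag"
	"fmt"
	"os"

	"github.com/peterbourgon/ff/v3/ffcli"
	"tailscale.com/cmd/tailscale/cli/ffcomplete"
)

func main() {
	root := &ffcli.Command{
		Name:       "prog", // should match the installed binary name so the scripts work by default
		ShortUsage: "prog <subcommand>",
		FlagSet:    flag.NewFlagSet("prog", flag.ContinueOnError),
		Exec:       func(context.Context, []string) error { return flag.ErrHelp },
	}

	// Adds the "completion" subcommand (shell scripts) plus the hidden
	// "completion __complete" entry point that those scripts call back into.
	ffcomplete.Inject(root, nil, nil)

	if err := root.ParseAndRun(context.Background(), os.Args[1:]); err != nil && !errors.Is(err, flag.ErrHelp) {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```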

@ -0,0 +1,17 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19 && ts_omit_completion
package ffcomplete
import (
"flag"
"github.com/peterbourgon/ff/v3/ffcli"
)
func Inject(root *ffcli.Command, hide func(*ffcli.Command), usageFunc func(*ffcli.Command) string) {}
func Flag(fs *flag.FlagSet, name string, comp CompleteFunc) {}
func Args(cmd *ffcli.Command, comp CompleteFunc) *ffcli.Command { return cmd }

@ -0,0 +1,60 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package ffcomplete
import (
"strings"
"tailscale.com/cmd/tailscale/cli/ffcomplete/internal"
"tailscale.com/tempfork/spf13/cobra"
)
type ShellCompDirective = cobra.ShellCompDirective
const (
ShellCompDirectiveError = cobra.ShellCompDirectiveError
ShellCompDirectiveNoSpace = cobra.ShellCompDirectiveNoSpace
ShellCompDirectiveNoFileComp = cobra.ShellCompDirectiveNoFileComp
ShellCompDirectiveFilterFileExt = cobra.ShellCompDirectiveFilterFileExt
ShellCompDirectiveFilterDirs = cobra.ShellCompDirectiveFilterDirs
ShellCompDirectiveKeepOrder = cobra.ShellCompDirectiveKeepOrder
ShellCompDirectiveDefault = cobra.ShellCompDirectiveDefault
)
// CompleteFunc is used to return tab-completion suggestions to the user as they
// are typing command-line instructions. It returns the list of things to
// suggest and an additional directive to the shell about what extra
// functionality to enable.
type CompleteFunc = internal.CompleteFunc
// LastArg returns the last element of args, or the empty string if args is
// empty.
func LastArg(args []string) string {
if len(args) == 0 {
return ""
}
return args[len(args)-1]
}
// Fixed returns a CompleteFunc which suggests the given words.
func Fixed(words ...string) CompleteFunc {
return func(args []string) ([]string, cobra.ShellCompDirective, error) {
match := LastArg(args)
matches := make([]string, 0, len(words))
for _, word := range words {
if strings.HasPrefix(word, match) {
matches = append(matches, word)
}
}
return matches, cobra.ShellCompDirectiveNoFileComp, nil
}
}
// FilesWithExtensions returns a CompleteFunc that tells the shell to limit file
// suggestions to those with the given extensions.
func FilesWithExtensions(exts ...string) CompleteFunc {
return func(args []string) ([]string, cobra.ShellCompDirective, error) {
return exts, cobra.ShellCompDirectiveFilterFileExt, nil
}
}
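And a sketch of registering completions on an individual command, combining Fixed, LastArg, Flag, and Args from above. The "env" subcommand, its flag, and the suggested words are purely illustrative:

```go
package example

import (
	"flag"
	"strings"

	"github.com/peterbourgon/ff/v3/ffcli"
	"tailscale.com/cmd/tailscale/cli/ffcomplete"
)

// newEnvCmd returns a hypothetical subcommand with flag and argument completion.
func newEnvCmd() *ffcli.Command {
	fs := flag.NewFlagSet("env", flag.ContinueOnError)
	fs.String("format", "", "output format")
	// `prog env --format=<TAB>` suggests one of these fixed words.
	ffcomplete.Flag(fs, "format", ffcomplete.Fixed("json", "table"))

	cmd := &ffcli.Command{
		Name:       "env",
		ShortUsage: "prog env [--format=...] <var>",
		FlagSet:    fs,
	}
	// `prog env <TAB>` suggests variable names matching the prefix typed so far.
	ffcomplete.Args(cmd, func(args []string) ([]string, ffcomplete.ShellCompDirective, error) {
		prefix := ffcomplete.LastArg(args)
		var words []string
		for _, w := range []string{"HOME", "PATH", "TS_DEBUG"} {
			if strings.HasPrefix(w, prefix) {
				words = append(words, w)
			}
		}
		return words, ffcomplete.ShellCompDirectiveNoFileComp, nil
	})
	return cmd
}
```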

@ -0,0 +1,270 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package internal
import (
"flag"
"fmt"
"strings"
"github.com/peterbourgon/ff/v3"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/tempfork/spf13/cobra"
)
var (
CompleteCmds map[*ffcli.Command]CompleteFunc
CompleteFlags map[*flag.Flag]CompleteFunc
)
type CompleteFunc func([]string) ([]string, cobra.ShellCompDirective, error)
// Complete returns the autocomplete suggestions for the root program and args.
//
// The returned words do not necessarily need to be prefixed with the last arg
// which is being completed. For example, '--bool-flag=' will have completions
// 'true' and 'false'.
//
// "HIDDEN: " is trimmed from the start of Flag Usage's.
func Complete(root *ffcli.Command, args []string, startFlags, descs bool) (words []string, dir cobra.ShellCompDirective, err error) {
// Explicitly log panics.
defer func() {
if r := recover(); r != nil {
if rerr, ok := err.(error); ok {
err = fmt.Errorf("panic: %w", rerr)
} else {
err = fmt.Errorf("panic: %v", r)
}
}
}()
// Set up the arguments.
if len(args) == 0 {
args = []string{""}
}
// Completion criteria.
completeArg := args[len(args)-1]
args = args[:len(args)-1]
emitFlag := startFlags || strings.HasPrefix(completeArg, "-")
emitArgs := true
// Traverse the command-tree to find the cmd command whose
// subcommand, flags, or arguments are being completed.
cmd := root
walk:
for {
// Ensure there's a flagset with ContinueOnError set.
if cmd.FlagSet == nil {
cmd.FlagSet = flag.NewFlagSet(cmd.Name, flag.ContinueOnError)
}
cmd.FlagSet.Init(cmd.FlagSet.Name(), flag.ContinueOnError)
// Manually split the args so we know when we're completing flags/args.
flagArgs, argArgs, flagNeedingValue := splitFlagArgs(cmd.FlagSet, args)
if flagNeedingValue != "" {
completeArg = flagNeedingValue + "=" + completeArg
emitFlag = true
}
args = argArgs
// Parse the flags.
err := ff.Parse(cmd.FlagSet, flagArgs, cmd.Options...)
if err != nil {
return nil, 0, fmt.Errorf("%s flag parsing: %w", cmd.Name, err)
}
if cmd.FlagSet.NArg() > 0 {
// This shouldn't happen if splitFlagArgs is accurately finding the
// split between flags and args.
_ = false
}
if len(args) == 0 {
break
}
// Check if the first argument is actually a subcommand.
for _, sub := range cmd.Subcommands {
if strings.EqualFold(sub.Name, args[0]) {
args = args[1:]
cmd = sub
continue walk
}
}
break
}
if len(args) > 0 {
emitFlag = false
}
// Complete '-flag=...'. If the args ended with '-flag ...' we will have
// rewritten to '-flag=...' by now.
if emitFlag && strings.HasPrefix(completeArg, "-") && strings.Contains(completeArg, "=") {
// Don't complete '-flag' later on as the
// flag name is terminated by a '='.
emitFlag = false
emitArgs = false
dashFlag, completeVal, _ := strings.Cut(completeArg, "=")
_, f := cutDash(dashFlag)
flag := cmd.FlagSet.Lookup(f)
if flag != nil {
if comp := CompleteFlags[flag]; comp != nil {
// Complete custom flag values.
var err error
words, dir, err = comp([]string{completeVal})
if err != nil {
return nil, 0, fmt.Errorf("completing %s flag %s: %w", cmd.Name, flag.Name, err)
}
} else if isBoolFlag(flag) {
// Complete true/false.
for _, vals := range [][]string{
{"true", "TRUE", "True", "1"},
{"false", "FALSE", "False", "0"},
} {
for _, val := range vals {
if strings.HasPrefix(val, completeVal) {
words = append(words, val)
break
}
}
}
}
}
}
// Complete '-flag...'.
if emitFlag {
used := make(map[string]struct{})
cmd.FlagSet.Visit(func(f *flag.Flag) {
used[f.Name] = struct{}{}
})
cd, cf := cutDash(completeArg)
cmd.FlagSet.VisitAll(func(f *flag.Flag) {
if !strings.HasPrefix(f.Name, cf) {
return
}
// Skip flags already set by the user.
if _, seen := used[f.Name]; seen {
return
}
// Suggest single-dash '-v' for single-char flags and
// double-dash '--verbose' for longer.
d := cd
if (d == "" || d == "-") && cf == "" && len(f.Name) > 1 {
d = "--"
}
if descs {
_, usage := flag.UnquoteUsage(f)
usage = strings.TrimPrefix(usage, "HIDDEN: ")
if usage != "" {
words = append(words, d+f.Name+"\t"+usage)
return
}
}
words = append(words, d+f.Name)
})
}
if emitArgs {
// Complete 'sub...'.
for _, sub := range cmd.Subcommands {
if strings.HasPrefix(sub.Name, completeArg) {
if descs {
if sub.ShortHelp != "" {
words = append(words, sub.Name+"\t"+sub.ShortHelp)
continue
}
}
words = append(words, sub.Name)
}
}
// Complete custom args.
if comp := CompleteCmds[cmd]; comp != nil {
w, d, err := comp(append(args, completeArg))
if err != nil {
return nil, 0, fmt.Errorf("completing %s args: %w", cmd.Name, err)
}
dir = d
words = append(words, w...)
}
}
// Strip any descriptions if they were suppressed.
clean := words[:0]
for _, w := range words {
if !descs {
w, _, _ = strings.Cut(w, "\t")
}
w = cutAny(w, "\n\r")
if w == "" || w[0] == '\t' {
continue
}
clean = append(clean, w)
}
return clean, dir, nil
}
func cutAny(s, cutset string) string {
i := strings.IndexAny(s, cutset)
if i == -1 {
return s
}
return s[:i]
}
// splitFlagArgs separates a list of command-line arguments into the leading
// arguments comprising flags and their values, and the remaining arguments to be
// passed to the command, following the stdlib 'flag' parsing conventions. If the
// final argument is a flag name which takes a value but has none specified, it is
// omitted from both flagArgs and argArgs and instead returned in flagNeedingValue.
func splitFlagArgs(fs *flag.FlagSet, args []string) (flagArgs, argArgs []string, flagNeedingValue string) {
for i := 0; i < len(args); i++ {
a := args[i]
if a == "--" {
return args[:i], args[i+1:], ""
}
d, f := cutDash(a)
if d == "" {
return args[:i], args[i:], ""
}
if strings.Contains(f, "=") {
continue
}
flag := fs.Lookup(f)
if flag == nil {
return args[:i], args[i:], ""
}
if isBoolFlag(flag) {
continue
}
// Consume an extra argument for the flag value.
if i == len(args)-1 {
return args[:i], nil, args[i]
}
i++
}
return args, nil, ""
}
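To make the three-way split concrete, a small sketch of splitFlagArgs's expected results; since the function is unexported this would have to live in the internal package (for example in a test), and the flag names are illustrative:

```go
package internal

import (
	"flag"
	"fmt"
)

func exampleSplitFlagArgs() {
	fs := flag.NewFlagSet("sketch", flag.ContinueOnError)
	fs.String("s", "", "takes a value")
	fs.Bool("b", false, "boolean")

	// Flags and their values stay in flagArgs; the first non-flag argument starts argArgs.
	fmt.Println(splitFlagArgs(fs, []string{"-b", "-s", "v", "arg1"})) // -> ["-b" "-s" "v"], ["arg1"], ""

	// "--" terminates flag parsing; everything after it belongs to argArgs.
	fmt.Println(splitFlagArgs(fs, []string{"-b", "--", "-s"})) // -> ["-b"], ["-s"], ""

	// A trailing value-taking flag with no value yet is reported in flagNeedingValue.
	fmt.Println(splitFlagArgs(fs, []string{"-s"})) // -> [], nil, "-s"
}
```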
func cutDash(s string) (dashes, flag string) {
if strings.HasPrefix(s, "-") {
if strings.HasPrefix(s[1:], "-") {
return "--", s[2:]
}
return "-", s[1:]
}
return "", s
}
func isBoolFlag(f *flag.Flag) bool {
bf, ok := f.Value.(interface {
IsBoolFlag() bool
})
return ok && bf.IsBoolFlag()
}

@ -0,0 +1,225 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package internal_test
import (
_ "embed"
"flag"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/cmd/tailscale/cli/ffcomplete"
"tailscale.com/cmd/tailscale/cli/ffcomplete/internal"
)
func newFlagSet(name string, errh flag.ErrorHandling, flags func(fs *flag.FlagSet)) *flag.FlagSet {
fs := flag.NewFlagSet(name, errh)
if flags != nil {
flags(fs)
}
return fs
}
func TestComplete(t *testing.T) {
t.Parallel()
// Build our test program in testdata.
root := &ffcli.Command{
Name: "prog",
FlagSet: newFlagSet("prog", flag.ContinueOnError, func(fs *flag.FlagSet) {
fs.Bool("v", false, "verbose")
fs.Bool("root-bool", false, "root `bool`")
fs.String("root-str", "", "some `text`")
}),
Subcommands: []*ffcli.Command{
{
Name: "debug",
ShortHelp: "Debug data",
FlagSet: newFlagSet("prog debug", flag.ExitOnError, func(fs *flag.FlagSet) {
fs.String("cpu-profile", "", "write cpu profile to `file`")
fs.Bool("debug-bool", false, "debug bool")
fs.Int("level", 0, "a number")
fs.String("enum", "", "a flag that takes several specific values")
ffcomplete.Flag(fs, "enum", ffcomplete.Fixed("alpha", "beta", "charlie"))
}),
},
func() *ffcli.Command {
cmd := &ffcli.Command{
Name: "ping",
FlagSet: newFlagSet("prog ping", flag.ContinueOnError, func(fs *flag.FlagSet) {
fs.String("until", "", "when pinging should end\nline break!")
ffcomplete.Flag(fs, "until", ffcomplete.Fixed("forever", "direct"))
}),
}
ffcomplete.Args(cmd, ffcomplete.Fixed(
"jupiter\t5th planet\nand largets",
"neptune\t8th planet",
"venus\t2nd planet",
"\tonly description",
"\nonly line break",
))
return cmd
}(),
},
}
tests := []struct {
args []string
showFlags bool
showDescs bool
wantComp []string
wantDir ffcomplete.ShellCompDirective
}{
{
args: []string{"deb"},
wantComp: []string{"debug"},
},
{
args: []string{"deb"},
showDescs: true,
wantComp: []string{"debug\tDebug data"},
},
{
args: []string{"-"},
wantComp: []string{"--root-bool", "--root-str", "-v"},
},
{
args: []string{"--"},
wantComp: []string{"--root-bool", "--root-str", "--v"},
},
{
args: []string{"-r"},
wantComp: []string{"-root-bool", "-root-str"},
},
{
args: []string{"--r"},
wantComp: []string{"--root-bool", "--root-str"},
},
{
args: []string{"--root-str=s", "--r"},
wantComp: []string{"--root-bool"}, // omits --root-str which is already set
},
{
// '--' disables flag parsing, so we shouldn't suggest flags.
args: []string{"--", "--root"},
wantComp: nil,
},
{
// '--' is used as the value of '--root-str'.
args: []string{"--root-str", "--", "--r"},
wantComp: []string{"--root-bool"},
},
{
// '--' here is a flag value, so doesn't disable flag parsing.
args: []string{"--root-str", "--", "--root"},
wantComp: []string{"--root-bool"},
},
{
// Equivalent to '--root-str=-- -- --r' meaning '--r' is not
// a flag because it's preceded by a '--' argument:
// https://go.dev/play/p/UCtftQqVhOD.
args: []string{"--root-str", "--", "--", "--r"},
wantComp: nil,
},
{
args: []string{"--root-bool="},
wantComp: []string{"true", "false"},
},
{
args: []string{"--root-bool=t"},
wantComp: []string{"true"},
},
{
args: []string{"--root-bool=T"},
wantComp: []string{"TRUE"},
},
{
args: []string{"debug", "--de"},
wantComp: []string{"--debug-bool"},
},
{
args: []string{"debug", "--enum="},
wantComp: []string{"alpha", "beta", "charlie"},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},
{
args: []string{"debug", "--enum=al"},
wantComp: []string{"alpha"},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},
{
args: []string{"debug", "--level", ""},
wantComp: nil,
},
{
args: []string{"debug", "--enum", "b"},
wantComp: []string{"beta"},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},
{
args: []string{"debug", "--enum", "al"},
wantComp: []string{"alpha"},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},
{
args: []string{"ping", ""},
showFlags: true,
wantComp: []string{"--until", "jupiter", "neptune", "venus"},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},
{
args: []string{"ping", ""},
showFlags: true,
showDescs: true,
wantComp: []string{
"--until\twhen pinging should end",
"jupiter\t5th planet",
"neptune\t8th planet",
"venus\t2nd planet",
},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},
{
args: []string{"ping", ""},
wantComp: []string{"jupiter", "neptune", "venus"},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},
{
args: []string{"ping", "j"},
wantComp: []string{"jupiter"},
wantDir: ffcomplete.ShellCompDirectiveNoFileComp,
},
}
// Run the tests.
for _, test := range tests {
test := test
name := strings.Join(test.args, "␣")
if test.showFlags {
name += "+flags"
}
if test.showDescs {
name += "+descs"
}
t.Run(name, func(t *testing.T) {
// Run completion for the given arguments.
complete, dir, err := internal.Complete(root, test.args, test.showFlags, test.showDescs)
if err != nil {
t.Fatalf("completion error: %s", err)
}
// Test the results match our expectation.
if test.wantComp != nil {
if diff := cmp.Diff(test.wantComp, complete); diff != "" {
t.Errorf("unexpected completion directives (-want +got):\n%s", diff)
}
}
if test.wantDir != dir {
t.Errorf("got shell completion directive %[1]d (%[1]s), want %[2]d (%[2]s)", dir, test.wantDir)
}
})
}
}
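Outside the table test, the same entry point can be exercised directly. A small sketch (editor's addition, not part of the diff) that mirrors one of the cases above, using the internal.Complete signature shown here, the newFlagSet helper from this file, and a hypothetical command tree:

func exampleComplete() {
	root := &ffcli.Command{
		Name: "prog",
		FlagSet: newFlagSet("prog", flag.ContinueOnError, func(fs *flag.FlagSet) {
			fs.Bool("v", false, "verbose")
		}),
		Subcommands: []*ffcli.Command{{Name: "debug", ShortHelp: "Debug data"}},
	}

	// Completing the partial word "deb" with showDescs=true suggests the
	// "debug" subcommand with its ShortHelp appended after a tab.
	words, dir, err := internal.Complete(root, []string{"deb"}, false, true)
	if err != nil {
		panic(err)
	}
	_ = words // []string{"debug\tDebug data"}
	_ = dir   // the ffcomplete.ShellCompDirective for this query
}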

@@ -0,0 +1,85 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19 && !ts_omit_completion && !ts_omit_completion_scripts
package ffcomplete
import (
"context"
"flag"
"os"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/tempfork/spf13/cobra"
)
func compCmd(fs *flag.FlagSet) string {
var s strings.Builder
s.WriteString("completion __complete")
fs.VisitAll(func(f *flag.Flag) {
s.WriteString(" --")
s.WriteString(f.Name)
s.WriteString("=")
s.WriteString(f.Value.String())
})
s.WriteString(" --")
return s.String()
}
func scriptCmds(root *ffcli.Command, usageFunc func(*ffcli.Command) string) []*ffcli.Command {
nameForVar := root.Name
nameForVar = strings.ReplaceAll(nameForVar, "-", "_")
nameForVar = strings.ReplaceAll(nameForVar, ":", "_")
var (
bashFS = newFS("bash", &compOpts{})
zshFS = newFS("zsh", &compOpts{})
fishFS = newFS("fish", &compOpts{})
pwshFS = newFS("powershell", &compOpts{})
)
return []*ffcli.Command{
{
Name: "bash",
ShortHelp: "Generate bash shell completion script",
ShortUsage: ". <( " + root.Name + " completion bash )",
UsageFunc: usageFunc,
FlagSet: bashFS,
Exec: func(ctx context.Context, args []string) error {
return cobra.ScriptBash(os.Stdout, root.Name, compCmd(bashFS), nameForVar)
},
},
{
Name: "zsh",
ShortHelp: "Generate zsh shell completion script",
ShortUsage: ". <( " + root.Name + " completion zsh )",
UsageFunc: usageFunc,
FlagSet: zshFS,
Exec: func(ctx context.Context, args []string) error {
return cobra.ScriptZsh(os.Stdout, root.Name, compCmd(zshFS), nameForVar)
},
},
{
Name: "fish",
ShortHelp: "Generate fish shell completion script",
ShortUsage: root.Name + " completion fish | source",
UsageFunc: usageFunc,
FlagSet: fishFS,
Exec: func(ctx context.Context, args []string) error {
return cobra.ScriptFish(os.Stdout, root.Name, compCmd(fishFS), nameForVar)
},
},
{
Name: "powershell",
ShortHelp: "Generate powershell completion script",
ShortUsage: root.Name + " completion powershell | Out-String | Invoke-Expression",
UsageFunc: usageFunc,
FlagSet: pwshFS,
Exec: func(ctx context.Context, args []string) error {
return cobra.ScriptPowershell(os.Stdout, root.Name, compCmd(pwshFS), nameForVar)
},
},
}
}
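To make the generated scripts' callback concrete, here is a short sketch (editor's addition, not part of the diff) of what compCmd returns for a hypothetical completion FlagSet: each registered flag is pinned to its current value, and the trailing "--" presumably keeps the words being completed from being parsed as flags of the __complete command. The snippet assumes this file's existing imports:

func exampleCompCmd() {
	fs := flag.NewFlagSet("completion bash", flag.ContinueOnError)
	fs.Bool("descs", true, "include descriptions in suggestions") // hypothetical flag
	s := compCmd(fs)
	_ = s // "completion __complete --descs=true --"
}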

@@ -0,0 +1,12 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19 && !ts_omit_completion && ts_omit_completion_scripts
package ffcomplete
import "github.com/peterbourgon/ff/v3/ffcli"
func scriptCmds(root *ffcli.Command, usageFunc func(*ffcli.Command) string) []*ffcli.Command {
return nil
}

@@ -26,6 +26,7 @@ import (
"github.com/peterbourgon/ff/v3/ffcli"
"golang.org/x/time/rate"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/cmd/tailscale/cli/ffcomplete"
"tailscale.com/envknob"
"tailscale.com/net/tsaddr"
"tailscale.com/syncs"
@@ -38,18 +39,12 @@ import (
var fileCmd = &ffcli.Command{
Name: "file",
ShortUsage: "file <cp|get> ...",
ShortUsage: "tailscale file <cp|get> ...",
ShortHelp: "Send or receive files",
Subcommands: []*ffcli.Command{
fileCpCmd,
fileGetCmd,
},
Exec: func(context.Context, []string) error {
// TODO(bradfitz): is there a better ffcli way to
// annotate subcommand-required commands that don't
// have an exec body of their own?
return errors.New("file subcommand required; run 'tailscale file -h' for details")
},
}
type countingReader struct {
@@ -65,7 +60,7 @@ func (c *countingReader) Read(buf []byte) (int, error) {
var fileCpCmd = &ffcli.Command{
Name: "cp",
ShortUsage: "file cp <files...> <target>:",
ShortUsage: "tailscale file cp <files...> <target>:",
ShortHelp: "Copy file(s) to a host",
Exec: runCp,
FlagSet: (func() *flag.FlagSet {
@@ -412,7 +407,7 @@ func (v *onConflict) Set(s string) error {
var fileGetCmd = &ffcli.Command{
Name: "get",
ShortUsage: "file get [--wait] [--verbose] [--conflict=(skip|overwrite|rename)] <target-directory>",
ShortUsage: "tailscale file get [--wait] [--verbose] [--conflict=(skip|overwrite|rename)] <target-directory>",
ShortHelp: "Move files out of the Tailscale file inbox",
Exec: runFileGet,
FlagSet: (func() *flag.FlagSet {
@@ -420,10 +415,11 @@ var fileGetCmd = &ffcli.Command{
fs.BoolVar(&getArgs.wait, "wait", false, "wait for a file to arrive if inbox is empty")
fs.BoolVar(&getArgs.loop, "loop", false, "run get in a loop, receiving files as they come in")
fs.BoolVar(&getArgs.verbose, "verbose", false, "verbose output")
fs.Var(&getArgs.conflict, "conflict", `behavior when a conflicting (same-named) file already exists in the target directory.
fs.Var(&getArgs.conflict, "conflict", "`behavior`"+` when a conflicting (same-named) file already exists in the target directory.
skip: skip conflicting files: leave them in the taildrop inbox and print an error. get any non-conflicting files
overwrite: overwrite existing file
rename: write to a new number-suffixed filename`)
ffcomplete.Flag(fs, "conflict", ffcomplete.Fixed("skip", "overwrite", "rename"))
return fs
})(),
}
@@ -560,7 +556,7 @@ func runFileGetOneBatch(ctx context.Context, dir string) []error {
func runFileGet(ctx context.Context, args []string) error {
if len(args) != 1 {
return errors.New("usage: file get <target-directory>")
return errors.New("usage: tailscale file get <target-directory>")
}
log.SetFlags(0)

@@ -8,7 +8,6 @@ import (
"flag"
"fmt"
"net"
"os"
"strconv"
"strings"
@@ -37,9 +36,9 @@ func newFunnelCommand(e *serveEnv) *ffcli.Command {
Name: "funnel",
ShortHelp: "Turn on/off Funnel service",
ShortUsage: strings.Join([]string{
"funnel <serve-port> {on|off}",
"funnel status [--json]",
}, "\n "),
"tailscale funnel <serve-port> {on|off}",
"tailscale funnel status [--json]",
}, "\n"),
LongHelp: strings.Join([]string{
"Funnel allows you to publish a 'tailscale serve'",
"server publicly, open to the entire internet.",
@@ -47,17 +46,16 @@ func newFunnelCommand(e *serveEnv) *ffcli.Command {
"Turning off Funnel only turns off serving to the internet.",
"It does not affect serving to your tailnet.",
}, "\n"),
Exec: e.runFunnel,
UsageFunc: usageFunc,
Exec: e.runFunnel,
Subcommands: []*ffcli.Command{
{
Name: "status",
Exec: e.runServeStatus,
ShortHelp: "show current serve/funnel status",
Name: "status",
Exec: e.runServeStatus,
ShortUsage: "tailscale funnel status [--json]",
ShortHelp: "Show current serve/funnel status",
FlagSet: e.newFlags("funnel-status", func(fs *flag.FlagSet) {
fs.BoolVar(&e.json, "json", false, "output JSON")
}),
UsageFunc: usageFunc,
},
},
}
@@ -169,10 +167,10 @@ func printFunnelWarning(sc *ipn.ServeConfig) {
p, _ := strconv.ParseUint(portStr, 10, 16)
if _, ok := sc.TCP[uint16(p)]; !ok {
warn = true
fmt.Fprintf(os.Stderr, "\nWarning: funnel=on for %s, but no serve config\n", hp)
fmt.Fprintf(Stderr, "\nWarning: funnel=on for %s, but no serve config\n", hp)
}
}
if warn {
fmt.Fprintf(os.Stderr, " run: `tailscale serve --help` to see how to configure handlers\n")
fmt.Fprintf(Stderr, " run: `tailscale serve --help` to see how to configure handlers\n")
}
}
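The os.Stderr to Stderr change above routes warnings through the CLI package's output writer. A minimal sketch of that pattern, assuming Stderr is a package-level io.Writer defaulting to os.Stderr (its declaration is not shown in this diff, and the function name below is hypothetical):

package cli // hypothetical placement

import (
	"fmt"
	"io"
	"os"
)

// Stderr is assumed to be declared roughly like this, so tests or wrappers
// can substitute their own writer instead of the process's stderr.
var Stderr io.Writer = os.Stderr

func warnf(format string, args ...any) {
	fmt.Fprintf(Stderr, format, args...) // writes wherever the package directs stderr
}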

@@ -8,18 +8,23 @@ import (
"errors"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/envknob"
)
var idTokenCmd = &ffcli.Command{
Name: "id-token",
ShortUsage: "id-token <aud>",
ShortHelp: "fetch an OIDC id-token for the Tailscale machine",
ShortUsage: "tailscale id-token <aud>",
ShortHelp: "Fetch an OIDC id-token for the Tailscale machine",
LongHelp: hidden,
Exec: runIDToken,
}
func runIDToken(ctx context.Context, args []string) error {
if !envknob.UseWIPCode() {
return errors.New("tailscale id-token: works-in-progress require TAILSCALE_USE_WIP_CODE=1 envvar")
}
if len(args) != 1 {
return errors.New("usage: id-token <aud>")
return errors.New("usage: tailscale id-token <aud>")
}
tr, err := localClient.IDToken(ctx, args[0])

@@ -16,7 +16,7 @@ import (
var ipCmd = &ffcli.Command{
Name: "ip",
ShortUsage: "ip [-1] [-4] [-6] [peer hostname or ip address]",
ShortUsage: "tailscale ip [-1] [-4] [-6] [peer hostname or ip address]",
ShortHelp: "Show Tailscale IP addresses",
LongHelp: "Show Tailscale IP addresses for peer. Peer defaults to the current machine.",
Exec: runIP,

@@ -12,7 +12,7 @@ import (
var licensesCmd = &ffcli.Command{
Name: "licenses",
ShortUsage: "licenses",
ShortUsage: "tailscale licenses",
ShortHelp: "Get open source license information",
LongHelp: "Get open source license information",
Exec: runLicenses,

@@ -14,11 +14,10 @@ var loginArgs upArgsT
var loginCmd = &ffcli.Command{
Name: "login",
ShortUsage: "login [flags]",
ShortUsage: "tailscale login [flags]",
ShortHelp: "Log in to a Tailscale account",
LongHelp: `"tailscale login" logs this machine in to your Tailscale network.
This command is currently in alpha and may change in the future.`,
UsageFunc: usageFunc,
FlagSet: func() *flag.FlagSet {
return newUpFlagSet(effectiveGOOS(), &loginArgs, "login")
}(),

Some files were not shown because too many files have changed in this diff.