Compare commits


195 Commits

Author SHA1 Message Date
Tom Proctor f8cd07fb8a .github: make cigocacher script more robust
We got a flake in https://github.com/tailscale/tailscale/actions/runs/19867229792/job/56933249360
but it's not obvious to me where it failed. Make it more robust and
print out more useful error messages for next time.

Updates tailscale/corp#10808

Change-Id: I9ca08ea1103b9ad968c9cc0c42a493981ea62435
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
24 hours ago
Brad Fitzpatrick b8c58ca7c1 wgengine: fix TSMP/ICMP callback leak
Fixes #18112

Change-Id: I85d5c482b01673799d51faeb6cb0579903597502
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 day ago
Gesa Stupperich 536188c1b5 tsnet: enable node registration via federated identity
Updates tailscale/corp#34148

Signed-off-by: Gesa Stupperich <gesa@tailscale.com>
1 day ago
Joe Tsai 957a443b23
cmd/netlogfmt: allow empty --resolve-addrs flag (#18103)
Updates tailscale/corp#33352

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 day ago
Raj Singh bd5c50909f
scripts/installer: add TAILSCALE_VERSION environment variable (#18014)
Add support for pinning specific Tailscale versions during installation
via the TAILSCALE_VERSION environment variable.

Example usage:
  curl -fsSL https://tailscale.com/install.sh | TAILSCALE_VERSION=1.88.4 sh

Fixes #17776

Signed-off-by: Raj Singh <raj@tailscale.com>
1 day ago
Tom Proctor 22a815b6d2 tool: bump binaryen wasm optimiser version 111 -> 125
111 is 3 years old, and there have been a lot of speed improvements
since then. We run wasm-opt twice as part of the CI wasm job, and it
currently takes about 3 minutes each time. With 125, it takes ~40
seconds, a 4.5x speed-up.

Updates #cleanup

Change-Id: I671ae6cefa3997a23cdcab6871896b6b03e83a4f
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
1 day ago
License Updater 8976b34cb8 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
1 day ago
Naasir 77dcdc223e cleanup: fix typos across multiple files
Does not affect code.

Updates #cleanup

Signed-off-by: Naasir <yoursdeveloper@protonmail.com>
1 day ago
Tom Proctor ece6e27f39 .github,cmd/cigocacher: use cigocacher for windows
Implements a new disk put function for cigocacher that does not cause
locking issues on Windows when there are multiple processes reading and
writing the same files concurrently. Integrates cigocacher into test.yml
for Windows where we are running on larger runners that support
connecting to private Azure vnet resources where cigocached is hosted.

Updates tailscale/corp#10808

Change-Id: I0d0e9b670e49e0f9abf01ff3d605cd660dd85ebb
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
1 day ago
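The "disk put" approach described above is commonly implemented by writing to a temp file and renaming it into place. A minimal sketch, assuming a hypothetical cache layout (illustrative only, not the actual cigocacher code):

```go
package sketch

import (
	"os"
	"path/filepath"
)

// putAtomic writes data for key into dir so that concurrent readers never
// observe a partially written file: the bytes land in a temp file first,
// then get renamed into place. On POSIX the rename is atomic; on Windows
// it maps to MoveFileEx-style replacement, which is why the real client
// needs extra care around readers holding the destination open.
func putAtomic(dir, key string, data []byte) error {
	tmp, err := os.CreateTemp(dir, key+".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename succeeds
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filepath.Join(dir, key))
}
```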
Tom Proctor 97f1fd6d48 .github: only save cache on main
The cache artifacts from a full run of test.yml are 14GB. Only save
artifacts from the main branch to ensure we don't thrash too much. Most
branches should get decent performance with a hit from recent main.

Fixes tailscale/corp#34739

Change-Id: Ia83269d878e4781e3ddf33f1db2f21d06ea2130f
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
1 day ago
Shaikh Naasir 37b4dd047f
k8s-operator: Fix typos in egress-pod-readiness.go
Updates #cleanup

Signed-off-by: Alex Chan <alexc@tailscale.com>
2 days ago
Alex Chan bd12d8f12f cmd/tailscale/cli: soften the warning on `--force-reauth` for seamless
Thanks to seamless key renewal, you can now do a force-reauth without
losing your connection in all circumstances. We softened the interactive
warning (see #17262) so let's soften the help text as well.

Updates https://github.com/tailscale/corp/issues/32429

Signed-off-by: Alex Chan <alexc@tailscale.com>
2 days ago
Anton Tolchanov 34dff57137 feature/posture: log method and full URL for posture identity requests
Updates tailscale/corp#34676

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2 days ago
Fernando Serboncini f36eb81e61
cmd/k8s-operator: fix populateTLSSecret in tests (#18088)
The call to populateTLSSecret was broken between PRs.

Updates #cleanup

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>
5 days ago
Fernando Serboncini 7c5c02b77a
cmd/k8s-operator: add support for tailscale.com/http-redirect (#17596)
* cmd/k8s-operator: add support for tailscale.com/http-redirect

The k8s-operator now supports a tailscale.com/http-redirect annotation
on Ingress resources. When enabled, this creates port 80 handlers that
automatically redirect to the equivalent HTTPS location.

Fixes #11252

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

* Fix for permanent redirect

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

* lint

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

* warn for redirect+endpoint

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

* tests

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

---------

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>
5 days ago
Mario Minardi 411cee0dc9 .github/workflows: only run golangci-lint when Go files have changed
Restrict running the golangci-lint workflow to when the workflow file
itself, a .go file, go.mod, or go.sum has actually been modified.

Updates #cleanup

Signed-off-by: Mario Minardi <mario@tailscale.com>
6 days ago
dependabot[bot] b40272e767 build(deps): bump braces from 3.0.2 to 3.0.3 in /client/web
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-version: 3.0.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
6 days ago
dependabot[bot] 22bdf34a00 build(deps): bump cross-spawn from 7.0.3 to 7.0.6 in /client/web
Bumps [cross-spawn](https://github.com/moxystudio/node-cross-spawn) from 7.0.3 to 7.0.6.
- [Changelog](https://github.com/moxystudio/node-cross-spawn/blob/master/CHANGELOG.md)
- [Commits](https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.6)

---
updated-dependencies:
- dependency-name: cross-spawn
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
6 days ago
dependabot[bot] c0c0d45114 build(deps-dev): bump vitest from 1.3.1 to 1.6.1 in /client/web
Bumps [vitest](https://github.com/vitest-dev/vitest/tree/HEAD/packages/vitest) from 1.3.1 to 1.6.1.
- [Release notes](https://github.com/vitest-dev/vitest/releases)
- [Commits](https://github.com/vitest-dev/vitest/commits/v1.6.1/packages/vitest)

---
updated-dependencies:
- dependency-name: vitest
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
6 days ago
dependabot[bot] 3e2476ec13 build(deps-dev): bump vite from 5.1.7 to 5.4.21 in /client/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.1.7 to 5.4.21.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.4.21/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.4.21/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 5.4.21
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
6 days ago
dependabot[bot] 9500689bc1 build(deps): bump js-yaml from 4.1.0 to 4.1.1 in /client/web
Bumps [js-yaml](https://github.com/nodeca/js-yaml) from 4.1.0 to 4.1.1.
- [Changelog](https://github.com/nodeca/js-yaml/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nodeca/js-yaml/compare/4.1.0...4.1.1)

---
updated-dependencies:
- dependency-name: js-yaml
  dependency-version: 4.1.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
6 days ago
Mario Minardi 9cc07bf9c0 .github/workflows: skip draft PRs for request review workflows
Skip the "request review" workflows for PRs that are in draft to reduce
noise / skip adding reviewers to PRs that are intentionally marked as
not ready to review.

Updates #cleanup

Signed-off-by: Mario Minardi <mario@tailscale.com>
7 days ago
Brad Fitzpatrick 74ed589042 syncs: add means of declaring locking assumptions for debug-mode validation
Updates #17852

Change-Id: I42a64a990dcc8f708fa23a516a40731a19967aba
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
7 days ago
Jonathan Nobels 3f9f0ed93c
VERSION.txt: this is v1.93.0 (#18074)
Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
7 days ago
James Tucker 5ee0c6bf1d derp/derpserver: add a unique sender cardinality estimate
Adds an observation point that may identify potentially abusive traffic
patterns at outlier values.

Updates tailscale/corp#24681

Signed-off-by: James Tucker <james@tailscale.com>
7 days ago
Andrew Lytvynov 9eff8a4503
feature/tpm: return opening errors from both /dev/tpmrm0 and /dev/tpm0 (#18071)
This might help users diagnose why TPM access is failing for tpmrm0.

Fixes #18026

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
1 week ago
Brad Fitzpatrick 8af7778ce0 util/execqueue: don't hold mutex in RunSync
We don't hold q.mu while running normal ExecQueue.Add funcs, so we
shouldn't in RunSync either. Otherwise code it calls can't shut down
the queue, as seen in #18502.

Updates #18052

Co-authored-by: Nick Khyl <nickk@tailscale.com>
Change-Id: Ic5e53440411eca5e9fabac7f4a68a9f6ef026de1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Alex Chan b7658a4ad2 tstest/integration: add integration test for Tailnet Lock
This patch adds an integration test for Tailnet Lock, checking that a node can't
talk to peers in the tailnet until it becomes signed.

This patch also introduces a new package `tstest/tkatest`, which has some helpers
for constructing a mock control server that responds to TKA requests. This allows
us to reduce boilerplate in the IPN tests.

Updates tailscale/corp#33599

Signed-off-by: Alex Chan <alexc@tailscale.com>
1 week ago
Jordan Whited 824027305a cmd/tailscale/cli,ipn,all: make peer relay server port a *uint16
In preparation for exposing its configuration via ipn.ConfigVAlpha,
change {Masked}Prefs.RelayServerPort from *int to *uint16. This takes a
defensive stance against invalid inputs at JSON decode time.

'tailscale set --relay-server-port' is currently the only input to this
pref, and has always sanitized input to fit within a uint16.

Updates tailscale/corp#34591

Signed-off-by: Jordan Whited <jordan@tailscale.com>
1 week ago
Sachin Iyer 53476ce872 ipn/serve: validate service paths in HasPathHandler
Fixes #17839

Signed-off-by: Sachin Iyer <siyer@detail.dev>
1 week ago
Claus Lensbøl c54d243690
net/tstun: add TSMPDiscoAdvertisement to TSMPPing (#17995)
Adds a new type of TSMP message for advertising disco keys
to/from a peer, and implements the advertising triggered by a TSMP ping.

Needed as part of the effort to cache the netmap and still let clients
connect without control being reachable.

Updates #12639

Signed-off-by: Claus Lensbøl <claus@tailscale.com>
Co-authored-by: James Tucker <james@tailscale.com>
1 week ago
Alex Chan b38dd1ae06 ipn/ipnlocal: don't panic if there are no suitable exit nodes
In suggestExitNodeLocked, if no exit node candidates have a home DERP or
valid location info, `bestCandidates` is an empty slice. This slice is
passed to `selectNode` (`randomNode` in prod):

```go
func randomNode(nodes views.Slice[tailcfg.NodeView], …) tailcfg.NodeView {
	…
	return nodes.At(rand.IntN(nodes.Len()))
}
```

An empty slice becomes a call to `rand.IntN(0)`, which panics.

This patch changes the behaviour: if we've filtered out all the
candidates before calling `selectNode`, we reset the list and then pick
from any of the available candidates.

This patch also updates our tests to give us more coverage of `randomNode`,
so we can spot other potential issues.

Updates #17661

Change-Id: I63eb5e4494d45a1df5b1f4b1b5c6d5576322aa72
Signed-off-by: Alex Chan <alexc@tailscale.com>
1 week ago
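A sketch of the guard described above, with hypothetical names standing in for the real ipnlocal types; `rand.IntN` is from math/rand/v2, matching the snippet quoted in the commit:

```go
package sketch

import "math/rand/v2"

// pickNode illustrates the fix: if filtering removed every candidate,
// fall back to the unfiltered list instead of letting rand.IntN(0) panic.
func pickNode(filtered, all []string) (string, bool) {
	candidates := filtered
	if len(candidates) == 0 {
		candidates = all // reset: pick from any available candidate
	}
	if len(candidates) == 0 {
		return "", false // no exit nodes at all
	}
	return candidates[rand.IntN(len(candidates))], true
}
```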
Fran Bull f4a4bab105 tsconsensus: skip integration tests in CI
There is an issue to add non-integration tests: #18022

Fixes #15627 #16340

Signed-off-by: Fran Bull <fran@tailscale.com>
1 week ago
Brad Fitzpatrick ac0b15356d tailcfg, control/controlclient: start moving MapResponse.DefaultAutoUpdate to a nodeattr
And fix up the TestAutoUpdateDefaults integration tests as they
weren't testing reality: the DefaultAutoUpdate is supposed to only be
relevant on the first MapResponse in the stream, but the tests weren't
testing that. They were instead injecting a 2nd+ MapResponse.

This changes the test control server to add a hook to modify the first
map response, and then has the test control server make new map
responses when the node goes up and down.

Also, the test now runs on macOS, where the auto-update feature being
disabled would previously have t.Skipped the whole test.

Updates #11502

Change-Id: If2319bd1f71e108b57d79fe500b2acedbc76e1a6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 week ago
Simon Law 848978e664
ipn/ipnlocal: test traffic-steering when feature is not enabled (#17997)
In PR tailscale/corp#34401, the `traffic-steering` feature flag does
not automatically enable traffic steering for all nodes. Instead, an
admin must add the `traffic-steering` node attribute to each client
node that they want opted-in.

For backwards compatibility with older clients, tailscale/corp#34401
strips out the `traffic-steering` node attribute if the feature flag
is not enabled, even if it is set in the policy file. This lets us
safely disable the feature flag.

This PR adds a missing test case for suggested exit nodes that have no
priority.

Updates tailscale/corp#34399

Signed-off-by: Simon Law <sfllaw@tailscale.com>
1 week ago
Nick Khyl 7073f246d3 ipn/ipnlocal: do not call controlclient.Client.Shutdown with b.mu held
This fixes a regression in #17804 that caused a deadlock.

Updates #18052

Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 week ago
David Bond d4821cdc2f
cmd/k8s-operator: allow HA ingresses to be deleted when VIP service does not exist (#18050)
This commit fixes a bug in our HA ingress reconciler where ingress resources would
be stuck in a deleting state should their associated VIP service be deleted within
control.

The reconciliation loop would check for the existence of the VIP service and if not
found perform no additional cleanup steps. The code has been modified to continue
onwards even if the VIP service is not found.

Fixes: https://github.com/tailscale/tailscale/issues/18049

Signed-off-by: David Bond <davidsbond93@gmail.com>
1 week ago
Simon Law 9c3a2aa797
ipn/ipnlocal: replace log.Printf with logf (#18045)
Updates #cleanup

Signed-off-by: Simon Law <sfllaw@tailscale.com>
1 week ago
Jordan Whited 7426eca163 cmd/tailscale,feature/relayserver,ipn: add relay-server-static-endpoints set flag
Updates tailscale/corp#31489
Updates #17791

Signed-off-by: Jordan Whited <jordan@tailscale.com>
1 week ago
Jordan Whited 755309c04e net/udprelay: use blake2s-256 MAC for handshake challenge
This commit replaces crypto/rand challenge generation with a blake2s-256
MAC. This enables the peer relay server to respond to multiple forward
disco.BindUDPRelayEndpoint messages per handshake generation without
sacrificing the proof of IP ownership properties of the handshake.

Responding to multiple forward disco.BindUDPRelayEndpoint messages per
handshake generation improves client address/path selection where
lowest client->server path/addr one-way delay does not necessarily
equate to lowest client<->server round trip delay.

It also improves situations where outbound traffic is filtered
independently of input, and the first reply
disco.BindUDPRelayEndpointChallenge message is dropped on the reply
path, but a later reply using a different source would make it through.

The reduction in serverEndpoint state saves 112 bytes per instance,
trading for slightly more expensive crypto ops: 277ns/op vs 321ns/op on
an M1 MacBook Pro.

Updates tailscale/corp#34414

Signed-off-by: Jordan Whited <jordan@tailscale.com>
1 week ago
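The MAC-based challenge can be sketched as below, assuming a hypothetical per-generation server secret and using golang.org/x/crypto/blake2s (a keyed BLAKE2s acts as a MAC). Because the challenge is derived rather than stored, the server can recompute it for each repeated bind message; the inputs here are illustrative, not the udprelay wire format:

```go
package sketch

import (
	"net/netip"

	"golang.org/x/crypto/blake2s"
)

// challengeFor derives the handshake challenge as a keyed MAC over the
// client's observed address, so the server keeps no per-handshake random
// challenge state and can answer repeated bind messages consistently.
func challengeFor(secret [32]byte, client netip.AddrPort) [32]byte {
	h, err := blake2s.New256(secret[:]) // keyed BLAKE2s-256 acts as a MAC
	if err != nil {
		panic(err) // only possible with an invalid key length
	}
	b, _ := client.MarshalBinary()
	h.Write(b)
	var out [32]byte
	h.Sum(out[:0])
	return out
}
```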
Tom Proctor 6637003cc8 cmd/cigocacher,go.mod: add cigocacher cmd
Adds cmd/cigocacher as the client to cigocached for Go caching over
HTTP. The HTTP cache is best-effort only, and builds will fall back to
disk-only cache if it's not available, much like regular builds.

Not yet used in CI; that will follow in another PR once we have runners
available in this repo with the right network setup for reaching
cigocached.

Updates tailscale/corp#10808

Change-Id: I13ae1a12450eb2a05bd9843f358474243989e967
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
1 week ago
Andrew Dunham 698eecda04 ipn/ipnlocal: fix panic in driveTransport on network error
When the underlying transport returns a network error, the RoundTrip
method returns (nil, error). The defer was attempting to access resp
without checking if it was nil first, causing a panic. Fix this by
checking for nil in the defer.

Also changes driveTransport.tr from *http.Transport to http.RoundTripper
and adds a test.

Fixes #17306

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
Change-Id: Icf38a020b45aaa9cfbc1415d55fd8b70b978f54c
1 week ago
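The shape of the bug and fix, sketched with an illustrative RoundTripper (not the actual driveTransport): when the wrapped transport returns an error, resp is nil, so any deferred inspection must check for that first.

```go
package sketch

import (
	"log"
	"net/http"
)

// loggingTransport wraps an http.RoundTripper (rather than a concrete
// *http.Transport, so tests can stub it) and inspects responses in a defer.
type loggingTransport struct {
	rt http.RoundTripper
}

func (t *loggingTransport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
	defer func() {
		if resp == nil {
			return // network error: there is no response to inspect
		}
		log.Printf("%s %s -> %d", req.Method, req.URL, resp.StatusCode)
	}()
	return t.rt.RoundTrip(req)
}
```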
Andrew Dunham a20cdb5c93 tstest/integration/testcontrol: de-flake TestUserMetricsRouteGauges
SetSubnetRoutes was not sending update notifications to nodes when their
approved routes changed, causing nodes to not fetch updated netmaps with
PrimaryRoutes populated. This resulted in TestUserMetricsRouteGauges
flaking because it waited for PrimaryRoutes to be set, which only happened
if the node happened to poll for other reasons.

Now send updateSelfChanged notification to affected nodes so they fetch
an updated netmap immediately.

Fixes #17962

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
1 week ago
Andrew Dunham 16587746ed portlist,tstest: skip tests on kernels with /proc/net/tcp regression
Linux kernel versions 6.6.102-104 and 6.12.42-45 have a regression
in /proc/net/tcp that causes seek operations to fail with "illegal seek".
This breaks portlist tests on these kernels.

Add kernel version detection for Linux systems and a SkipOnKernelVersions
helper to tstest. Use it to skip affected portlist tests on the broken
kernel versions.

Thanks to philiptaron for the list of kernels with the issue and fix.

Updates #16966

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
2 weeks ago
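An illustrative version of such a helper (hedged: the actual tstest.SkipOnKernelVersions signature may differ), reading the running kernel release via golang.org/x/sys/unix:

```go
package sketch

import (
	"runtime"
	"strings"
	"testing"

	"golang.org/x/sys/unix"
)

// skipOnBrokenKernel skips t on Linux kernels with the /proc/net/tcp seek
// regression. The prefix match is deliberately simplified for illustration.
func skipOnBrokenKernel(t *testing.T) {
	if runtime.GOOS != "linux" {
		return
	}
	var uts unix.Utsname
	if err := unix.Uname(&uts); err != nil {
		return
	}
	release := unix.ByteSliceToString(uts.Release[:]) // e.g. "6.6.103-generic"
	for _, bad := range []string{
		"6.6.102", "6.6.103", "6.6.104",
		"6.12.42", "6.12.43", "6.12.44", "6.12.45",
	} {
		if strings.HasPrefix(release, bad) {
			t.Skipf("kernel %s has the /proc/net/tcp seek regression", release)
		}
	}
}
```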
Nick Khyl 1ccece0f78 util/eventbus: use unbounded event queues for DeliveredEvents in subscribers
Bounded DeliveredEvent queues reduce memory usage, but they can deadlock under load.
Two common scenarios trigger deadlocks when the number of events published in a short
period exceeds twice the queue capacity (there's a PublishedEvent queue of the same size):
 - a subscriber tries to acquire the same mutex as held by a publisher, or
 - a subscriber for A events publishes B events

Avoiding these scenarios is not practical and would limit eventbus usefulness and reduce its adoption,
pushing us back to callbacks and other legacy mechanisms. These deadlocks already occurred in customer
devices, dev machines, and tests. They also make it harder to identify and fix slow subscribers and similar
issues we have been seeing recently.

Choosing an arbitrary large fixed queue capacity would only mask the problem. A client running
on a sufficiently large and complex customer environment can exceed any meaningful constant limit,
since event volume depends on the number of peers and other factors. Behavior also changes
based on scheduling of publishers and subscribers by the Go runtime, OS, and hardware, as the issue
is essentially a race between publishers and subscribers. Additionally, on lower-end devices,
an unreasonably high constant capacity is practically the same as using unbounded queues.

Therefore, this PR changes the event queue implementation to be unbounded by default.
The PublishedEvent queue keeps its existing capacity of 16 items, while subscribers'
DeliveredEvent queues become unbounded.

This change fixes known deadlocks and makes the system stable under load,
at the cost of higher potential memory usage, including cases where a queue grows
during an event burst and does not shrink when load decreases.

Further improvements can be implemented in the future as needed.

Fixes #17973
Fixes #18012

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2 weeks ago
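The core of an unbounded delivery queue can be sketched as a pump goroutine that buffers a backlog in a slice, so publishers never block on a slow subscriber (illustrative, not the eventbus internals):

```go
package sketch

// pump forwards events from in to out, buffering an unbounded backlog in
// a slice. A nil send channel disables that select case while the backlog
// is empty, so the goroutine blocks only on receiving.
func pump(in <-chan any, out chan<- any) {
	defer close(out)
	var backlog []any
	for in != nil || len(backlog) > 0 {
		var send chan<- any
		var next any
		if len(backlog) > 0 {
			send = out
			next = backlog[0]
		}
		select {
		case ev, ok := <-in:
			if !ok {
				in = nil // input closed: drain the backlog, then exit
				continue
			}
			backlog = append(backlog, ev)
		case send <- next:
			backlog = backlog[1:]
		}
	}
}
```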
Jordan Whited 9245c7131b feature/relayserver: don't publish from within a subscribe fn goroutine
Updates #17830

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2 weeks ago
Claus Lensbøl e7f5ca1d5e
wgengine/userspace: run link change subscribers in eventqueue (#18024)
Updates #17996

Signed-off-by: Claus Lensbøl <claus@tailscale.com>
2 weeks ago
Nick Khyl 3780f25d51 util/eventbus: add tests for a subscriber publishing events
As of 2025-11-20, publishing more events than the eventbus's
internal queues can hold may deadlock if a subscriber tries
to publish events itself.

This commit adds a test that demonstrates this deadlock,
and skips it until the bug is fixed.

Updates #18012

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2 weeks ago
Nick Khyl 016ccae2da util/eventbus: add tests for a subscriber trying to acquire the same mutex as a publisher
As of 2025-11-20, publishing more events than the eventbus's
internal queues can hold may deadlock if a subscriber tries
to acquire a mutex that can also be held by a publisher.

This commit adds a test that demonstrates this deadlock,
and skips it until the bug is fixed.

Updates #17973

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2 weeks ago
Alex Chan ce95bc77fb tka: don't panic if no clock set in tka.Mem
This is causing confusing panics in tailscale/corp#34485. We'll keep
using the tka.ChonkMem constructor as much as we can, but don't panic
if you create a tka.Mem directly -- we know what the sensible thing to do is.

Updates #cleanup

Signed-off-by: Alex Chan <alexc@tailscale.com>

Change-Id: I49309f5f403fc26ce4f9a6cf0edc8eddf6a6f3a4
2 weeks ago
Andrew Lytvynov c679aaba32
cmd/tailscaled,ipn: show a health warning when state store fails to open (#17883)
With the introduction of node sealing, store.New fails in some cases due
to the TPM device being reset or unavailable. Currently it results in
tailscaled crashing at startup, which is not obvious to the user until
they check the logs.

Instead of crashing tailscaled at startup, start with an in-memory store
with a health warning about state initialization and a link to (future)
docs on what to do. When this health message is set, also block any
login attempts to avoid masking the problem with an ephemeral node
registration.

Updates #15830
Updates #17654

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 weeks ago
Andrew Lytvynov de8ed203e0
go.mod: bump golang.org/x/crypto (#18011)
Pick up fixes for https://pkg.go.dev/vuln/GO-2025-4134

Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 weeks ago
Harry Harpham ac74d28190
ipn/ipnlocal: add validations when setting serve config (#17950)
These validations were previously performed in the CLI frontend. There
are two motivations for moving these to the local backend:
1. The backend controls synchronization around the relevant state, so
   only the backend can guarantee many of these validations.
2. Doing these validations in the backend avoids the need to repeat
   them across every frontend (e.g. the CLI and tsnet).

Updates tailscale/corp#27200

Signed-off-by: Harry Harpham <harry@tailscale.com>
2 weeks ago
David Bond 42a5262016
cmd/k8s-operator: add multi replica support for recorders (#17864)
This commit adds the `spec.replicas` field to the `Recorder` custom
resource that allows for a highly available deployment of `tsrecorder`
within a kubernetes cluster.

Many changes were required here, as the code hard-coded the assumption
of a single replica. This required adding a few loops, similar to what
we do for the `Connector` resource, to create auth and state secrets. It
was also necessary to add a check that removes dangling state and auth
secrets should the recorder be scaled down.

Updates: https://github.com/tailscale/tailscale/issues/17965

Signed-off-by: David Bond <davidsbond93@gmail.com>
2 weeks ago
Jonathan Nobels 682172ca2d net/netns: remove spammy logs for interface binding caps
Fixes tailscale/tailscale#17990

The logging for the netns caps is spammy. Log only on changes
to the values, and don't log Darwin-specific stuff on non-Darwin
clients.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2 weeks ago
Brad Fitzpatrick 7d19813618 net/batching: fix import formatting
From #17842

Updates #cleanup

Change-Id: Ie041b50659361b50558d5ec1f557688d09935f7c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
David Bond 86a849860e
cmd/k8s-operator: use stable image for k8s-nameserver (#17985)
This commit modifies the kubernetes operator to use the "stable" version
of `k8s-nameserver` by default.

Updates: https://github.com/tailscale/corp/issues/19028

Signed-off-by: David Bond <davidsbond93@gmail.com>
2 weeks ago
KevinLiang10 a0d059d74c
cmd/tailscale/cli: allow remote target as service destination (#17607)
This commit enables users to set a service backend to a remote destination, which can be a partial
URL or a full URL. The commit also prevents users from setting remote destinations on Linux systems
when socket marks are not working. Users on any version of the Mac extension can't serve to a remote
destination either. Socket mark usability is determined by a new local API.

Fixes tailscale/corp#24783

Signed-off-by: KevinLiang10 <37811973+KevinLiang10@users.noreply.github.com>
2 weeks ago
License Updater 12c598de28 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2 weeks ago
Alex Chan 976bf24f5e ipn/ipnlocal: remove the always-true CanSupportNetworkLock()
Now that we support using an in-memory backend for TKA state (#17946),
this function always returns `nil` – we can always support Network Lock.
We don't need it any more.

Plus, clean up a couple of errant TODOs from that PR.

Updates tailscale/corp#33599

Change-Id: Ief93bb9adebb82b9ad1b3e406d1ae9d2fa234877
Signed-off-by: Alex Chan <alexc@tailscale.com>
2 weeks ago
Brad Fitzpatrick 6ac4356bce util/eventbus: simplify some reflect in Bus.pump
Updates #cleanup

Change-Id: Ib7b497e22c6cdd80578c69cf728d45754e6f909e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Alex Chan 336df56f85 cmd/tailscale/cli: remove Latin abbreviations from CLI help text
Our style guide recommends avoiding Latin abbreviations in technical
documentation, which includes the CLI help text. This is causing linter
issues for the docs site, because this help text is copied into the docs.
See http://go/style-guide/kb/language-and-grammar/abbreviations#latin-abbreviations

Updates #cleanup

Change-Id: I980c28d996466f0503aaaa65127685f4af608039
Signed-off-by: Alex Chan <alexc@tailscale.com>
2 weeks ago
Alex Chan aeda3e8183 ipn/ipnlocal: reduce profileManager boilerplate in network-lock tests
Updates tailscale/corp#33537

Signed-off-by: Alex Chan <alexc@tailscale.com>
2 weeks ago
Raj Singh 62d64c05e1
cmd/k8s-operator: fix type comparison in apiserver proxy template (#17981)
ArgoCD sends boolean values but the template expects strings, causing
"incompatible types for comparison" errors. Wrap values with toString
so both work.

Fixes #17158

Signed-off-by: Raj Singh <raj@tailscale.com>
2 weeks ago
Alex Chan e1dd9222d4 ipn/ipnlocal, tka: compact TKA state after every sync
Previously a TKA compaction would only run when a node started, which meant a long-running node could use unbounded storage as it accumulated ever-increasing amounts of TKA state. This patch changes TKA so it runs a compaction after every sync.

Updates https://github.com/tailscale/corp/issues/33537

Change-Id: I91df887ea0c5a5b00cb6caced85aeffa2a4b24ee
Signed-off-by: Alex Chan <alexc@tailscale.com>
2 weeks ago
David Bond 38ccdbe35c
cmd/k8s-operator: default to stable image (#17848)
This commit modifies the helm/static manifest configuration for the
k8s-operator to prefer the stable image tag. This avoids users of the
static manifests seeing unstable behaviour by default if they do not
manually make the change.

This is managed for us when using helm but not when generating the
static manifests.

Updates https://github.com/tailscale/tailscale/issues/10655

Signed-off-by: David Bond <davidsbond93@gmail.com>
2 weeks ago
Brad Fitzpatrick 408336a089 feature/featuretags: add CacheNetMap feature tag for upcoming work
(trying to get in smaller obvious chunks ahead of later PRs to make
them smaller)

Updates #17925

Change-Id: I184002001055790484e4792af8ffe2a9a2465b2e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick 5b0c57f497 tailcfg: add some omitzero, adjust some omitempty to omitzero
Updates tailscale/corp#25406

Change-Id: I7832dbe3dce3774bcc831e3111feb75bcc9e021d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
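For reference, the difference between the two tags (omitzero was added to encoding/json in Go 1.24): omitempty keys off JSON "emptiness", while omitzero keys off the Go zero value, which also covers struct fields such as time.Time. A small illustration with hypothetical fields:

```go
package sketch

import "time"

// Example shows the difference: omitempty drops JSON-empty values
// (0, "", nil, empty slice/map), while omitzero drops the Go zero value
// and consults an IsZero() method if one exists, as time.Time's does.
type Example struct {
	Port    int       `json:",omitempty"` // omitted when 0
	Created time.Time `json:",omitzero"`  // omitted when Created.IsZero()
}
```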
Joe Tsai 3b865d7c33
cmd/netlogfmt: support resolving IP addresses to synonymous labels (#17955)
We now embed node information into network flow logs.
By default, netlogfmt still prints out using Tailscale IP addresses.
Support a "--resolve-addrs=TYPE" flag that can be used to specify
resolving IP addresses as node IDs, hostnames, users, or tags.

Updates tailscale/corp#33352

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2 weeks ago
James Tucker c09c95ef67 types/key,wgengine/magicsock,control/controlclient,ipn: add debug disco key rotation
Adds the ability to rotate discovery keys on running clients, needed for
testing upcoming disco key distribution changes.

Introduces key.DiscoKey, an atomic container for a disco private key,
public key, and the public key's ShortString, replacing the prior
separate atomic fields.

magicsock.Conn has a new RotateDiscoKey method, and access to this is
provided via localapi and a CLI debug command.

Note that this implementation is primarily for testing as it stands, and
regular use should likely introduce an additional mechanism that allows
the old key to be used for some time, to provide a seamless key rotation
rather than one that invalidates all sessions.

Updates tailscale/corp#34037

Signed-off-by: James Tucker <james@tailscale.com>
2 weeks ago
Fran Bull da508c504d appc: add ippool type
As part of the conn25 work we will want to be able to keep track of a
pool of IP addresses and know which have been used and which have not.

Fixes tailscale/corp#34247

Signed-off-by: Fran Bull <fran@tailscale.com>
2 weeks ago
Alex Chan d0daa5a398 tka: marshal AUMHash to text even if Tailnet Lock is omitted
We use `tka.AUMHash` in `netmap.NetworkMap`, and we serialise it as JSON
in the `/debug/netmap` C2N endpoint. If the binary omits Tailnet Lock support,
the debug endpoint returns an error because it's unable to marshal the
AUMHash.

This patch adds a sentinel value so this marshalling works, and we can
use the debug endpoint.

Updates https://github.com/tailscale/tailscale/issues/17115

Signed-off-by: Alex Chan <alexc@tailscale.com>

Change-Id: I51ec1491a74e9b9f49d1766abd89681049e09ce4
2 weeks ago
Anton Tolchanov 04a9d25a54 tka: mark young AUMs as active even if the chain is long
Existing compaction logic seems to have had an assumption that
markActiveChain would cover a longer part of the chain than
markYoungAUMs. This prevented long but fresh chains from being
compacted correctly.

Updates tailscale/corp#33537

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2 weeks ago
Brad Fitzpatrick bd29b189fe types/netmap,*: remove some redundant fields from NetMap
Updates #12639

Change-Id: Ia50b15529bd1c002cdd2c937cdfbe69c06fa2dc8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick 2a6cbb70d9 .github/workflows: make go_generate check detect new files
Updates #17957

Change-Id: I904fd5b544ac3090b58c678c4726e7ace41a52dd
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick 4e2f2d1088 feature/buildfeatures: re-run go generate
6a73c0bdf5 added a feature tag but didn't re-run go generate on ./feature/buildfeatures.

Updates #9192

Change-Id: I7819450453e6b34c60cad29d2273e3e118291643
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Alex Chan af7c26aa05 cmd/vet/jsontags: fix a typo in an error message
Updates #17945

Change-Id: I8987271420feb190f5e4d85caff305c8d4e84aae
Signed-off-by: Alex Chan <alexc@tailscale.com>
2 weeks ago
Alex Chan 85373ef822 tka: move RemoveAll() to CompactableChonk
I added a RemoveAll() method on tka.Chonk in #17946, but it's only used
in the node to purge local AUMs. We don't need it in the SQLite storage,
which currently implements tka.Chonk, so move it to CompactableChonk
instead.

Also add some automated tests, as a safety net.

Updates tailscale/corp#33599

Change-Id: I54de9ccf1d6a3d29b36a94eccb0ebd235acd4ebc
Signed-off-by: Alex Chan <alexc@tailscale.com>
2 weeks ago
Alex Chan c2e474e729 all: rename variables with lowercase-l/uppercase-I
See http://go/no-ell

Signed-off-by: Alex Chan <alexc@tailscale.com>

Updates #cleanup

Change-Id: I8c976b51ce7a60f06315048b1920516129cc1d5d
2 weeks ago
James 'zofrex' Sanderson 9048ea25db
ipn/localapi: log calls to localapi (#17880)
Updates tailscale/corp#34238

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
2 weeks ago
James 'zofrex' Sanderson a2e9dfacde
cmd/tailscale/cli: warn if a simple up would change prefs (#17877)
Updates tailscale/corp#21570

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
2 weeks ago
Joe Tsai 4860c460f5
wgengine/netlog: strip dot suffix from node name (#17954)
The REST API does not return a node name
with a trailing dot, while the internal node name
reported in the netmap does have one.

In order to be consistent with the API,
strip the dot when recording node information.

Updates tailscale/corp#33352

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2 weeks ago
James Tucker 41662f5128 ssh/tailssh: fix incubator tests on macOS arm64
Perform a path check first before attempting exec of `true`.

Try /usr/bin/true first, as it is now, and increasingly, the more
common and more portable path.

Fixes tests on macOS arm64 where exec was returning a different kind of
path error than previously checked.

Updates #16569

Signed-off-by: James Tucker <james@tailscale.com>
2 weeks ago
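The path-check-then-exec order might look like this sketch (illustrative, not the tailssh incubator code); exec.LookPath on an absolute path verifies that the file exists and is executable without running it:

```go
package sketch

import (
	"errors"
	"os/exec"
)

// findTrue probes the common locations of true, preferring /usr/bin/true.
func findTrue() (string, error) {
	for _, p := range []string{"/usr/bin/true", "/bin/true"} {
		if path, err := exec.LookPath(p); err == nil {
			return path, nil
		}
	}
	return "", errors.New("no usable true binary found")
}
```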
Andrew Lytvynov 26f9b50247
feature/tpm: disable dictionary attack protection on sealing key (#17952)
DA protection is not super helpful because we don't set an authorization
password on the key. But if authorization fails for other reasons (like
TPM being reset), we will eventually cause DA lockout with tailscaled
trying to load the key. DA lockout then leads to (1) issues for other
processes using the TPM and (2) the underlying authorization error being
masked in logs.

Updates #17654

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 weeks ago
Brad Fitzpatrick f1cddc6ecf ipn{,/local},cmd/tailscale: add "sync" flag and pref to disable control map poll
For manual (human) testing, this lets the user disable control plane
map polls with "tailscale set --sync=false" (which survives restarts)
and "tailscale set --sync" to restore.

A high severity health warning is shown while this is active.

Updates #12639
Updates #17945

Change-Id: I83668fa5de3b5e5e25444df0815ec2a859153a6d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick 165a24744e tka: fix typo in comment
Let's fix all the typos, which lets the code be more readable, lest we
confuse our readers.

Updates #cleanup

Change-Id: I4954601b0592b1fda40269009647bb517a4457be
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Alex Chan 1723cb83ed ipn/ipnlocal: use an in-memory TKA store if FS is unavailable
This requires making the internals of LocalBackend a bit more generic,
and implementing the `tka.CompactableChonk` interface for `tka.Mem`.

Signed-off-by: Alex Chan <alexc@tailscale.com>

Updates https://github.com/tailscale/corp/issues/33599
2 weeks ago
Andrew Lytvynov d01081683c
go.mod: bump golang.org/x/crypto (#17907)
Pick up a fix for https://pkg.go.dev/vuln/GO-2025-4116 (even though
we're not affected).

Updates #cleanup

Change-Id: I9f2571b17c1f14db58ece8a5a34785805217d9dd

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 weeks ago
Alex Chan 200383dce5 various: add more missing apostrophes in comments
Updates #cleanup

Change-Id: I79a0fda9783064a226ee9bcee2c1148212f6df7b
Signed-off-by: Alex Chan <alexc@tailscale.com>
2 weeks ago
Brad Fitzpatrick 1e95bfa184 ipn: fix typo in comment
Updates #cleanup

Change-Id: Iec66518abd656c64943a58eb6d92f342e627a613
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick a5b2f18567 control/controlclient: remove some public API, move to Options & test-only
Includes adding StartPaused, which will be used in a future change to
enable netmap caching testing.

Updates #12639

Change-Id: Iec39915d33b8d75e9b8315b281b1af2f5d13a44a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Alex Chan 139c395d7d cmd/tailscale/cli: stabilise the output of `tailscale lock log --json`
This patch changes the behaviour of `tailscale lock log --json` to make
it more useful for users. It also introduces versioning of our JSON output.

## Changes to `tailscale lock log --json`

Previously this command would print the hash and base64-encoded bytes of
each AUM, and users would need their own CBOR decoder to interpret it in
a useful way:

```json
[
  {
    "Hash": [
      80,
      136,
      151,
      …
    ],
    "Change": "checkpoint",
    "Raw": "pAEFAvYFpQH2AopYIAkPN+8V3cJpkoC5ZY2+RI2Bcg2q5G7tRAQQd67W3YpnWCDPOo4KGeQBd8hdGsjoEQpSXyiPdlm+NXAlJ5dS1qEbFlggylNJDQM5ZQ2ULNsXxg2ZBFkPl/D93I1M56/rowU+UIlYIPZ/SxT9EA2Idy9kaCbsFzjX/s3Ms7584wWGbWd/f/QAWCBHYZzYiAPpQ+NXN+1Wn2fopQYk4yl7kNQcMXUKNAdt1lggcfjcuVACOH0J9pRNvYZQFOkbiBmLOW1hPKJsbC1D1GdYIKrJ38XMgpVMuTuBxM4YwoLmrK/RgXQw1uVEL3cywl3QWCA0FilVVv8uys8BNhS62cfNvCew1Pw5wIgSe3Prv8d8pFggQrwIt6ldYtyFPQcC5V18qrCnt7VpThACaz5RYzpx7RNYIKskOA7UoNiVtMkOrV2QoXv6EvDpbO26a01lVeh8UCeEA4KjAQECAQNYIORIdNHqSOzz1trIygnP5w3JWK2DtlY5NDIBbD7SKcjWowEBAgEDWCD27LpxiZNiA19k0QZhOWmJRvBdK2mz+dHu7rf0iGTPFwQb69Gt42fKNn0FGwRUiav/k6dDF4GiAVgg5Eh00epI7PPW2sjKCc/nDclYrYO2Vjk0MgFsPtIpyNYCWEDzIAooc+m45ay5PB/OB4AA9Fdki4KJq9Ll+PF6IJHYlOVhpTbc3E0KF7ODu1WURd0f7PXnW72dr89CSfGxIHAF"
  }
]
```

Now we print the AUM in an expanded form that can be easily read by scripts,
and we still include the raw bytes for verification and auditing.

```json
{
  "SchemaVersion": "1",
  "Messages": [
    {
      "Hash": "KCEJPRKNSXJG2TPH3EHQRLJNLIIK2DV53FUNPADWA7BZJWBDRXZQ",
      "AUM": {
        "MessageKind": "checkpoint",
        "PrevAUMHash": null,
        "Key": null,
        "KeyID": null,
        "State": {
          …
        },
        "Votes": null,
        "Meta": null,
        "Signatures": [
          {
            "KeyID": "tlpub:e44874d1ea48ecf3d6dac8ca09cfe70dc958ad83b656393432016c3ed229c8d6",
            "Signature": "8yAKKHPpuOWsuTwfzgeAAPRXZIuCiavS5fjxeiCR2JTlYaU23NxNChezg7tVlEXdH+z151u9na/PQknxsSBwBQ=="
          }
        ]
      },
      "Raw": "pAEFAvYFpQH2AopYIAkPN-8V3cJpkoC5ZY2-RI2Bcg2q5G7tRAQQd67W3YpnWCDPOo4KGeQBd8hdGsjoEQpSXyiPdlm-NXAlJ5dS1qEbFlggylNJDQM5ZQ2ULNsXxg2ZBFkPl_D93I1M56_rowU-UIlYIPZ_SxT9EA2Idy9kaCbsFzjX_s3Ms7584wWGbWd_f_QAWCBHYZzYiAPpQ-NXN-1Wn2fopQYk4yl7kNQcMXUKNAdt1lggcfjcuVACOH0J9pRNvYZQFOkbiBmLOW1hPKJsbC1D1GdYIKrJ38XMgpVMuTuBxM4YwoLmrK_RgXQw1uVEL3cywl3QWCA0FilVVv8uys8BNhS62cfNvCew1Pw5wIgSe3Prv8d8pFggQrwIt6ldYtyFPQcC5V18qrCnt7VpThACaz5RYzpx7RNYIKskOA7UoNiVtMkOrV2QoXv6EvDpbO26a01lVeh8UCeEA4KjAQECAQNYIORIdNHqSOzz1trIygnP5w3JWK2DtlY5NDIBbD7SKcjWowEBAgEDWCD27LpxiZNiA19k0QZhOWmJRvBdK2mz-dHu7rf0iGTPFwQb69Gt42fKNn0FGwRUiav_k6dDF4GiAVgg5Eh00epI7PPW2sjKCc_nDclYrYO2Vjk0MgFsPtIpyNYCWEDzIAooc-m45ay5PB_OB4AA9Fdki4KJq9Ll-PF6IJHYlOVhpTbc3E0KF7ODu1WURd0f7PXnW72dr89CSfGxIHAF"
    }
  ]
}
```

This output was previously marked as unstable, and it wasn't very useful,
so changing it should be fine.

## Versioning our JSON output

This patch introduces a way to version our JSON output on the CLI, so we
can make backwards-incompatible changes in future without breaking existing
scripts or integrations.

You can run this command in two ways:

```
tailscale lock log --json
tailscale lock log --json=1
```

Passing an explicit version number allows you to pick a specific JSON schema.
If we ever want to change the schema, we increment the version number and
users must opt-in to the new output.

A bare `--json` flag will always return schema version 1, for compatibility
with existing scripts.

Updates https://github.com/tailscale/tailscale/issues/17613
Updates https://github.com/tailscale/corp/issues/23258

Signed-off-by: Alex Chan <alexc@tailscale.com>

Change-Id: I897f78521cc1a81651f5476228c0882d7b723606
2 weeks ago
Brad Fitzpatrick 99b06eac49 syncs: add Mutex/RWMutex alias/wrappers for future mutex debugging
Updates #17852

Change-Id: I477340fb8e40686870e981ade11cd61597c34a20
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Andrew Dunham 3a41c0c585 ipn/ipnlocal: add PROXY protocol support to Funnel/Serve
This adds the --proxy-protocol flag to 'tailscale serve' and
'tailscale funnel', which tells the Tailscale client to prepend a PROXY
protocol[1] header when making connections to the proxied-to backend.

I've verified that this works with our existing funnel servers without
additional work, since they pass along source address information via
PeerAPI already.

Updates #7747

[1]: https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

Change-Id: I647c24d319375c1b33e995555a541b7615d2d203
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
2 weeks ago
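For reference, a PROXY protocol v1 header is a single text line sent before the proxied bytes. A hedged sketch of what gets prepended (illustrative, not the serve implementation):

```go
package sketch

import (
	"fmt"
	"net"
	"net/netip"
)

// writeProxyV1 sends a PROXY protocol v1 header carrying the real client
// (src) and server (dst) addresses, e.g.
// "PROXY TCP4 192.0.2.1 198.51.100.1 56324 443\r\n".
func writeProxyV1(backend net.Conn, src, dst netip.AddrPort) error {
	proto := "TCP4"
	if src.Addr().Is6() {
		proto = "TCP6"
	}
	_, err := fmt.Fprintf(backend, "PROXY %s %s %s %d %d\r\n",
		proto, src.Addr(), dst.Addr(), src.Port(), dst.Port())
	return err
}
```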
Brad Fitzpatrick 653d0738f9 types/netmap: remove PrivateKey from NetworkMap
It's an unnecessary nuisance having it. We go out of our way to redact
it in so many places when we don't even need it there anyway.

Updates #12639

Change-Id: I5fc72e19e9cf36caeb42cf80ba430873f67167c3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 weeks ago
Brad Fitzpatrick 98aadbaf54 util/cache: remove unused code
Updates #cleanup

Change-Id: I9be7029c5d2a7d6297125d0147e93205a7c68989
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 4e01e8a66e wgengine/netlog: fix send to closed channel in test
Fixes #17922

Change-Id: I2cd600b0ecda389079f2004985ac9a25ffbbfdd1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Avery Palmer 8aa46a3956 util/clientmetric: fix regression causing Metric.v to be uninitialised
m.v was uninitialised when Tailscale was built with ts_omit_logtail
Fixes #17918

Signed-off-by: Avery Palmer <quagsirus@catpowered.net>
3 weeks ago
Xinyu Kuo 8444659ed8 cmd/tailscale/cli: fix panic in netcheck with mismatched DERP region IDs
Fixes #17564

Signed-off-by: Xinyu Kuo <gxylong@126.com>
3 weeks ago
Jordan Whited e1f0ad7a05
net/udprelay: implement Server.SetStaticAddrPorts (#17909)
Only used in tests for now.

Updates tailscale/corp#31489

Signed-off-by: Jordan Whited <jordan@tailscale.com>
3 weeks ago
James Tucker a96ef432cf control/controlclient,ipn/ipnlocal: replace State enum with boolean flags
Remove the State enum (StateNew, StateNotAuthenticated, etc.) from
controlclient and replace it with two explicit boolean fields:
- LoginFinished: indicates successful authentication
- Synced: indicates we've received at least one netmap

This makes the state more composable and easier to reason about, as
multiple conditions can be true independently rather than being
encoded in a single enum value.

The State enum was originally intended as the state machine for the
whole client, but that abstraction moved to ipn.Backend long ago.
This change continues moving away from the legacy state machine by
representing state as a combination of independent facts.

Also adds test helpers in ipnlocal that check independent, observable
facts (hasValidNetMap, needsLogin, etc.) rather than relying on
derived state enums, making tests more robust.

Updates #12639

Signed-off-by: James Tucker <james@tailscale.com>
3 weeks ago
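Conceptually, the change replaces one enum with independent facts, roughly as below (the field names come from the commit message; the surrounding struct is hypothetical):

```go
package sketch

// Status replaces the old State enum with independently true facts.
type Status struct {
	LoginFinished bool // authentication completed successfully
	Synced        bool // at least one netmap has been received
}

// Conditions now compose without enum ordering assumptions:
func (s Status) usable() bool { return s.LoginFinished && s.Synced }
```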
Andrew Lytvynov c5919b4ed1
feature/tpm: check IsZero in clone instead of just nil (#17884)
The key.NewEmptyHardwareAttestationKey hook returns a non-nil empty
attestationKey, which means the nil check in Clone doesn't trigger
and Clone proceeds to try to clone an empty key. Check IsZero instead to
reduce log spam from Clone.

As a drive-by, make tpmAvailable check a sync.Once because the result
won't change.

Updates #17882

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
Andrew Lytvynov 888a5d4812
ipn/localapi: use constant-time comparison for RequiredPassword (#17906)
Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
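The standard-library primitive for this is crypto/subtle, whose comparison takes time independent of where the inputs first differ (a sketch; the localapi wiring is not shown):

```go
package sketch

import "crypto/subtle"

// passwordOK compares secrets without an early-exit timing side channel.
// Note ConstantTimeCompare still returns immediately on a length mismatch,
// which leaks only the length.
func passwordOK(got, want string) bool {
	return subtle.ConstantTimeCompare([]byte(got), []byte(want)) == 1
}
```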
Alex Chan 9134440008 various: adds missing apostrophes to comments
Updates #cleanup

Change-Id: I7bf29cc153c3c04e087f9bdb146c3437bed0129a
Signed-off-by: Alex Chan <alexc@tailscale.com>
3 weeks ago
Simon Law bd36817e84
scripts/installer.sh: compare major versions numerically (#17904)
Most /etc/os-release files set the VERSION_ID to a `MAJOR.MINOR`
string, but we were trying to compare this numerically against a major
version number. I can only assume that Linux Mint switched from a
plain integer, since shells only do integer comparisons.

This patch extracts a VERSION_MAJOR from the VERSION_ID using
parameter expansion and unifies all the other ad-hoc comparisons to
use it.

Fixes #15841

Signed-off-by: Simon Law <sfllaw@tailscale.com>
Co-authored-by: Xavier <xhienne@users.noreply.github.com>
3 weeks ago
M. J. Fromberger ab4b990d51
net/netmon: do not abandon a subscriber when exiting early (#17899)
LinkChangeLogLimiter keeps a subscription to track rate limits for log
messages.  But when its context ended, it would exit the subscription loop,
leaving the subscriber still alive. Ensure the subscriber gets cleaned up
when the context ends, so we don't stall event processing.

Updates tailscale/corp#34311

Change-Id: I82749e482e9a00dfc47f04afbc69dd0237537cb2
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
3 weeks ago
Brad Fitzpatrick ce10f7c14c wgengine/wgcfg/nmcfg: reduce wireguard reconfig log spam
On the corp tailnet (using Mullvad exit nodes + a bunch of expired
devices + subnet routers), these were generating big ~35 KB blobs of
logging regularly.

This logging shouldn't even exist at this level, and should be rate
limited at a higher level, but for now as a bandaid, make it less
spammy.

Updates #cleanup

Change-Id: I0b5e9e6e859f13df5f982cd71cd5af85b73f0c0a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Andrew Dunham 208a32af5b logpolicy: fix nil pointer dereference with invalid TS_LOG_TARGET
When TS_LOG_TARGET is set to an invalid URL, url.Parse returns an error
and nil pointer, which caused a panic when accessing u.Host.

Now we check the error from url.Parse and log a helpful message while
falling back to the default log host.

Fixes #17792

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
3 weeks ago
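The fix pattern, sketched with a hypothetical fallback host: url.Parse returns a nil URL alongside its error, so the error must be checked before touching u.Host.

```go
package sketch

import (
	"log"
	"net/url"
)

// logHost returns the host to ship logs to, falling back to a default
// when the target doesn't parse. The default host here is illustrative.
func logHost(target string) string {
	const defaultHost = "log.example.com" // hypothetical default
	u, err := url.Parse(target)
	if err != nil {
		log.Printf("invalid TS_LOG_TARGET %q: %v; using default log host", target, err)
		return defaultHost
	}
	return u.Host
}
```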
Brad Fitzpatrick 052602752f control/controlclient: make Observer optional
As a baby step towards eventbus-ifying controlclient, make the
Observer optional.

This also means callers that don't care (like this network lock test,
and some tests in other repos) can omit it, rather than passing in a
no-op one.

Updates #12639

Change-Id: Ibd776b45b4425c08db19405bc3172b238e87da4e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Jordan Whited 0285e1d5fb
feature/relayserver: fix Shutdown() deadlock (#17898)
Updates #17894

Signed-off-by: Jordan Whited <jordan@tailscale.com>
3 weeks ago
James 'zofrex' Sanderson 124301fbb6
ipn/ipnlocal: log prefs changes and reason in Start (#17876)
Updates tailscale/corp#34238

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
3 weeks ago
Alex Chan b5cd29932e tka: add a test for unmarshaling existing AUMs
Updates https://github.com/tailscale/tailscale/issues/17613

Change-Id: I693a580949eef59263353af6e7e03a7af9bbaa0b
Signed-off-by: Alex Chan <alexc@tailscale.com>
3 weeks ago
Jordan Whited 9e4d1fd87f
feature/relayserver,ipn/ipnlocal,net/udprelay: plumb DERPMap (#17881)
This commit replaces usage of local.Client in net/udprelay with DERPMap
plumbing over the eventbus. This has been a longstanding TODO. This work
was also accelerated by a memory leak in net/http when using
local.Client over long periods of time. So, this commit also addresses
said leak.

Updates #17801

Signed-off-by: Jordan Whited <jordan@tailscale.com>
3 weeks ago
Brad Fitzpatrick 146ea42822 ipn/ipnlocal: remove all the weird locking (LockedOnEntry, UnlockEarly, etc)
Fixes #11649
Updates #16369

Co-authored-by: James Sanderson <jsanderson@tailscale.com>
Change-Id: I63eaa18fe870ddf81d84b949efac4d1b44c3db86
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Andrew Dunham 08e74effc0 cmd/cloner: support cloning arbitrarily-nested maps
Fixes #17870

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
3 weeks ago
Naman Sood ca9b68aafd
cmd/tailscale/cli: remove service flag from funnel command (#17850)
Fixes #17849.

Signed-off-by: Naman Sood <mail@nsood.in>
3 weeks ago
Andrew Dunham 6ac80b7334 cmd/{cloner,viewer}: handle maps of views
Instead of trying to call View() on something that's already a View
type (or trying to Clone the view unnecessarily), we can re-use the
existing View values in a map[T]ViewType.

Fixes #17866

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
3 weeks ago
Jordan Whited f4f9dd7f8c
net/udprelay: replace VNI pool with selection algorithm (#17868)
This reduces memory usage when tailscaled is acting as a peer relay.

Updates #17801

Signed-off-by: Jordan Whited <jordan@tailscale.com>
3 weeks ago
License Updater 31fe75ad9e licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
3 weeks ago
Fran Bull 37aa7e6935 util/dnsname: fix test error message
Updates #17788

Signed-off-by: Fran Bull <fran@tailscale.com>
3 weeks ago
Brad Fitzpatrick f387b1010e wgengine/wgcfg: remove two unused Config fields
They distracted me in some refactoring. They're set but never used.

Updates #17858

Change-Id: I6ec7d6841ab684a55bccca7b7cbf7da9c782694f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Fran Bull 27a0168cdc util/dnsname: increase maxNameLength to account for trailing dot
Fixes #17788

Signed-off-by: Fran Bull <fran@tailscale.com>
3 weeks ago
Jonathan Nobels e8d2f96449
ipn/ipnlocal, net/netns: add node cap to disable netns interface binding on netext Apple clients (#17691)
Updates tailscale/corp#31571

It appears that on the latest macOS, iOS, and tvOS versions, the work
that netns does to bind outgoing connections to the default interface (and all
of the trimmings and workarounds in netmon et al that make that work) is
not needed. The kernel is extension-aware, and doing nothing is the right
thing. This is, however, not the case for tailscaled (which is not a
special process).

To allow us to test this assertion (and where it might break things), we add a
new node cap that turns this behaviour off only for network-extension-equipped
clients, making it possible to turn this off tailnet-wide without breaking any
tailscaled macOS nodes.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
3 weeks ago
Sachin Iyer 16e90dcb27
net/batching: fix gro size handling for misordered UDP_GRO messages (#17842)
Fixes #17835

Signed-off-by: Sachin Iyer <siyer@detail.dev>
3 weeks ago
Sachin Iyer d37884c734
cmd/k8s-operator: remove early return in ingress matching (#17841)
Fixes #17834

Signed-off-by: Sachin Iyer <siyer@detail.dev>
3 weeks ago
Sachin Iyer 85cb64c4ff
wf: correct IPv6 link-local range from ff80::/10 to fe80::/10 (#17840)
Fixes #17833

Signed-off-by: Sachin Iyer <siyer@detail.dev>
3 weeks ago
Sachin Iyer 3280dac797 wgengine/router/osrouter: fix linux magicsock port changing
Fixes #17837

Signed-off-by: Sachin Iyer <siyer@detail.dev>
3 weeks ago
Brad Fitzpatrick 1eba5b0cbd util/eventbus: log goroutine stacks when hung in CI
Updates #17680

Change-Id: Ie48dc2d64b7583d68578a28af52f6926f903ca4f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 42ce5c88be wgengine/magicsock: unblock Conn.Synchronize on Conn.Close
I noticed a deadlock in a test in a in-development PR where during a
shutdown storm of things (from a tsnet.Server.Close), LocalBackend was
trying to call magicsock.Conn.Synchronize but the magicsock and/or
eventbus was already shut down and no longer processing events.

Updates #16369

Change-Id: I58b1f86c8959303c3fb46e2e3b7f38f6385036f1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Jordan Whited 2ad2d4d409
wgengine/magicsock: fix UDPRelayAllocReq/Resp deadlock (#17831)
Updates #17830

Signed-off-by: Jordan Whited <jordan@tailscale.com>
3 weeks ago
Jordan Whited 18806de400
wgengine/magicsock: validate endpoint.derpAddr in Conn.onUDPRelayAllocResp (#17828)
Otherwise a zero value will panic in Conn.sendUDPStd.

Updates #17827

Signed-off-by: Jordan Whited <jordan@tailscale.com>
3 weeks ago
Brad Fitzpatrick 4650061326 ipn/ipnlocal: fix state_test data race seen in CI
Unfortunately I closed the tab and lost it in my sea of CI failures
I'm currently fighting.

Updates #cleanup

Change-Id: I4e3a652d57d52b75238f25d104fc1987add64191
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 6e24f50946 tsnet: add tstest.Shard on the slow tests
So they're not all run N times on the sharded oss builders
and are instead run only once each.

Updates tailscale/corp#28679

Change-Id: Ie21e84b06731fdc8ec3212eceb136c8fc26b0115
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 8ed6bb3198 ipn/ipnlocal: move vipServiceHash etc to serve.go, out of local.go
Updates #12614

Change-Id: I3c16b94fcb997088ff18d5a21355e0279845ed7e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick e0e8731130 feature, ipn/ipnlocal: add, use feature.CanSystemdStatus for more DCE
When systemd notification support was omitted from the build, or on
non-Linux systems, we were unnecessarily emitting code and generating
garbage stringifying addresses upon transition to the Running state.

Updates #12614

Change-Id: If713f47351c7922bb70e9da85bf92725b25954b9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Jordan Whited e059382174
wgengine/magicsock: clean up determineEndpoints docs (#17822)
Updates #cleanup

Signed-off-by: Jordan Whited <jordan@tailscale.com>
3 weeks ago
Brad Fitzpatrick fe5501a4e9 wgengine: make getStatus a bit cheaper (less alloc-y)
This removes one of the O(n=peers) allocs in getStatus, as
Engine.getStatus happens more often than Reconfig.

Updates #17814

Change-Id: I8a87fbebbecca3aedadba38e46cc418fd163c2b0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Alex Chan 4c67df42f6 tka: log a better error if there are no chain candidates
Previously if `chains` was empty, it would be passed to `computeActiveAncestor()`,
which would fail with the misleading error "multiple distinct chains".

Updates tailscale/corp#33846

Signed-off-by: Alex Chan <alexc@tailscale.com>
Change-Id: Ib93a755dbdf4127f81cbf69f3eece5a388db31c8
3 weeks ago
Alex Chan c7dbd3987e tka: remove an unused parameter from `computeActiveAncestor`
Updates #cleanup

Change-Id: I86ee7a0d048dafc8c0d030291261240050451721
Signed-off-by: Alex Chan <alexc@tailscale.com>
3 weeks ago
Andrew Lytvynov ae3dff15e4
ipn/ipnlocal: clean up some of the weird locking (#17802)
* lock released early just to call `b.send` when it can call
  `b.sendToLocked` instead
* `UnlockEarly` called to release the lock before trivially fast
  operations; we can wait for a defer there

Updates #11649

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
Brad Fitzpatrick 2e265213fd tsnet: fix TestConn to be fast, not flaky
Fixes #17805

Change-Id: I36e37cb0cfb2ea7b2341fd4b9809fbf1dd46d991
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick de733c5951 tailcfg: kill off rest of HairPinning symbols
It was disabled in May 2024 in #12205 (9eb72bb51).

This removes the unused symbols.

Updates #188
Updates tailscale/corp#19106
Updates tailscale/corp#19116

Change-Id: I5208b7b750b18226ed703532ed58c4ea17195a8e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 weeks ago
Brad Fitzpatrick 875a9c526d tsnet: skip a 30s long flaky-ish test on macOS
Updates #17805

Change-Id: I540f50d067eee12e430dfd9de6871dc784fffb8a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
4 weeks ago
Raj Singh bab5e68d0a
net/udprelay: use GetGlobalAddrs and add local port endpoint (#17797)
Use GetGlobalAddrs() to discover all STUN endpoints, handling bad NATs
that create multiple mappings. When MappingVariesByDestIP is true, also
add the first STUN IPv4 address with the relay's local port for static
port mapping scenarios.

Updates #17796

Signed-off-by: Raj Singh <raj@tailscale.com>
4 weeks ago
Tom Proctor d4c5b278b3 cmd/k8s-operator: support workload identity federation
The feature is currently in private alpha, so requires a tailnet feature
flag. Initially focuses on supporting the operator's own auth, because the
operator is the only device we maintain that uses static long-lived
credentials. All other operator-created devices use single-use auth keys.

Testing steps:

* Create a cluster with an API server accessible over public internet
* kubectl get --raw /.well-known/openid-configuration | jq '.issuer'
* Create a federated OAuth client in the Tailscale admin console with:
  * The issuer from the previous step
  * Subject claim `system:serviceaccount:tailscale:operator`
  * Write scopes services, devices:core, auth_keys
  * Tag tag:k8s-operator
* Allow the Tailscale control plane to get the public portion of
  the ServiceAccount token signing key without authentication:
  * kubectl create clusterrolebinding oidc-discovery \
      --clusterrole=system:service-account-issuer-discovery \
      --group=system:unauthenticated
* helm install --set oauth.clientId=... --set oauth.audience=...

Updates #17457

Change-Id: Ib29c85ba97b093c70b002f4f41793ffc02e6c6e9
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
4 weeks ago
Tom Proctor 1ed117dbc0 cmd/k8s-operator: remove Services feature flag detection
Now that the feature is in beta, no one should encounter this error.

Updates #cleanup

Change-Id: I69ed3f460b7f28c44da43ce2f552042f980a0420
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
4 weeks ago
Joe Tsai 5b40f0bc54
cmd/vet: add static vet checker that runs jsontags (#17778)
This starts running the jsontags vet checker on the module.
All existing findings are adding to an allowlist.

Updates tailscale/corp#791

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
4 weeks ago
Joe Tsai 446752687c
cmd/vet: move jsontags into vet (#17777)
cmd/jsontags is non-idiomatic since it is not a main binary.
Move it to a vet directory, which will eventually contain a vettool binary.

Updates tailscale/corp#791

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
4 weeks ago
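For reference, a vettool is just a main package built on
golang.org/x/tools' multichecker; a minimal sketch of the shape (the
jsontags import path below is an assumption for illustration, not
confirmed against this repo):

    package main

    import (
        "golang.org/x/tools/go/analysis/multichecker"

        "tailscale.com/cmd/vet/jsontags" // assumed path, for illustration only
    )

    func main() {
        // multichecker.Main registers the analyzer(s) and speaks the
        // -vettool protocol, so the built binary can be used as:
        //   go vet -vettool=/path/to/vet ./...
        multichecker.Main(jsontags.Analyzer)
    }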
Joe Tsai 77123a569b
wgengine/netlog: include node OS in logged attributes (#17755)
Include the node's OS with network flow log information.

Refactor the JSON-length computation to be a bit more precise.

Updates tailscale/corp#33352
Fixes tailscale/corp#34030

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
4 weeks ago
Andrew Lytvynov db7dcd516f
Revert "control/controlclient: back out HW key attestation (#17664)" (#17732)
This reverts commit a760cbe33f.

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
1 month ago
M. J. Fromberger 4c856078e4
util/eventbus: block for the subscriber during SubscribeFunc close (#17642)
Prior to this change a SubscriberFunc treated the call to the subscriber's
function as the completion of delivery. But that means when we are closing the
subscriber, that callback could continue to execute for some time after the
close returns.

For channel-based subscribers that works OK because the close takes effect
before the subscriber ever sees the event. To make the two subscriber types
symmetric, we should also wait for the callback to finish before returning.
This ensures that a Close of the client means the same thing with both kinds of
subscriber.

Updates #17638

Change-Id: I82fd31bcaa4e92fab07981ac0e57e6e3a7d9d60b
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
1 month ago
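The mechanism can be sketched with a WaitGroup; this illustrates the
described semantics and is not the eventbus implementation itself:

    package main

    import "sync"

    // funcSub delivers events to a callback and lets Close block until
    // every in-flight callback has returned.
    type funcSub[T any] struct {
        fn func(T)
        wg sync.WaitGroup
    }

    func (s *funcSub[T]) deliver(ev T) {
        s.wg.Add(1)
        go func() {
            defer s.wg.Done()
            s.fn(ev)
        }()
    }

    // Close returns only once delivered callbacks have finished, so the
    // subscriber function cannot still be running after Close returns.
    func (s *funcSub[T]) Close() { s.wg.Wait() }

    func main() {
        s := &funcSub[int]{fn: func(int) {}}
        s.deliver(1)
        s.Close()
    }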
M. J. Fromberger 061e6266cf
util/eventbus: allow logging of slow subscribers (#17705)
Add options to the eventbus.Bus to plumb in a logger.

Route that logger in to the subscriber machinery, and trigger a log message to
it when a subscriber fails to respond to its delivered events for 5s or more.

The log message includes the package, filename, and line number of the call
site that created the subscription.

Add tests that verify this works.

Updates #17680

Change-Id: I0546516476b1e13e6a9cf79f19db2fe55e56c698
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
1 month ago
Andrew Lytvynov f522b9dbb7
feature/tpm: protect all TPM handle operations with a mutex (#17708)
In particular on Windows, the `transport.TPMCloser` we get is not safe
for concurrent use. This is especially noticeable because
`tpm.attestationKey.Clone` uses the same open handle as the original
key. So wrap the operations on ak.tpm with a mutex and make a deep copy
with a new connection in Clone.

Updates #15830
Updates #17662
Updates #17644

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
1 month ago
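A rough sketch of the shape of the fix, using stand-in types rather than
feature/tpm's actual API:

    package main

    import (
        "io"
        "sync"
    )

    type attestationKey struct {
        mu  sync.Mutex         // serializes every operation on tpm below
        tpm io.ReadWriteCloser // e.g. a TPM transport; not safe for concurrent use
    }

    func (k *attestationKey) sign(digest []byte) ([]byte, error) {
        k.mu.Lock()
        defer k.mu.Unlock()
        // ... use k.tpm only while holding the lock ...
        return nil, nil
    }

    // Clone opens a fresh connection instead of sharing the open handle,
    // so the copy cannot race the original on the same transport.
    func (k *attestationKey) Clone(open func() (io.ReadWriteCloser, error)) (*attestationKey, error) {
        conn, err := open()
        if err != nil {
            return nil, err
        }
        return &attestationKey{tpm: conn}, nil
    }

    func main() {}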
James 'zofrex' Sanderson b6c6960e40
control/controlclient: remove unused reference to mapCtx (#17614)
Updates #cleanup

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
1 month ago
Gesa Stupperich adee8b9180 cmd/tailscale/cli/serve_v2: improve validation error
Specify the app apability that failed the test, instead of the
entire comma-separated list.

Fixes #cleanup

Signed-off-by: Gesa Stupperich <gesa@tailscale.com>
1 month ago
M. J. Fromberger 95426b79a9
logtail: avoid racing eventbus subscriptions with shutdown (#17695)
In #17639 we moved the subscription into NewLogger to ensure we would not race
subscribing with shutdown of the eventbus client. Doing so fixed that problem,
but exposed another: As we were only servicing events occasionally when waiting
for the network to come up, we could leave the eventbus to stall in cases where
a number of network deltas arrived later and weren't processed.

To address that, let's separate the concerns: As before, we'll Subscribe early
to avoid conflicts with shutdown; but instead of using the subscriber directly
to determine readiness, we'll keep track of the last-known network state in a
selectable condition that the subscriber updates for us.  When we want to wait,
we'll wait on that condition (or until our context ends), ensuring all the
events get processed in a timely manner.

Updates #17638
Updates #15160

Change-Id: I28339a372be4ab24be46e2834a218874c33a0d2d
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
1 month ago
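The "selectable condition" pattern described here can be sketched as
follows (illustrative names, not logtail's actual code): the subscriber
goroutine consumes every delta promptly and records the latest state,
while waiters select on a change channel or their context:

    package main

    import (
        "context"
        "sync"
    )

    type netCond struct {
        mu   sync.Mutex
        up   bool          // last-known network state
        wake chan struct{} // closed and replaced on every update
    }

    func newNetCond() *netCond { return &netCond{wake: make(chan struct{})} }

    // set is called by the bus subscriber for every delta, so events are
    // consumed in a timely manner even when nobody is waiting.
    func (c *netCond) set(up bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.up = up
        close(c.wake)
        c.wake = make(chan struct{})
    }

    // awaitUp blocks until the network is up or ctx ends.
    func (c *netCond) awaitUp(ctx context.Context) error {
        for {
            c.mu.Lock()
            up, wake := c.up, c.wake
            c.mu.Unlock()
            if up {
                return nil
            }
            select {
            case <-wake:
            case <-ctx.Done():
                return ctx.Err()
            }
        }
    }

    func main() { _ = newNetCond() }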
Fernando Serboncini d68513b0db
ipn: add support for HTTP Redirects (#17594)
Adds a new Redirect field to HTTPHandler for serving HTTP redirects
from the Tailscale serve config. The redirect URL supports template
variables ${HOST} and ${REQUEST_URI} that are resolved per request.

By default, it redirects using HTTP Status 302 (Found). For another
redirect status, like 301 - Moved Permanently, pass the HTTP status
code followed by ':' on Redirect, like: "301:https://tailscale.com"

Updates #11252
Updates #11330

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>
1 month ago
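A sketch of how such a spec could be interpreted, following the rules in
the commit message (the helper below is illustrative, not ipn's actual
handler):

    package main

    import (
        "net/http"
        "strconv"
        "strings"
    )

    func redirectHandler(spec string) http.Handler {
        code, target := http.StatusFound, spec // default: 302 Found
        if i := strings.Index(spec, ":"); i > 0 {
            if n, err := strconv.Atoi(spec[:i]); err == nil {
                code, target = n, spec[i+1:] // e.g. "301:https://tailscale.com"
            }
        }
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            url := strings.NewReplacer(
                "${HOST}", r.Host,
                "${REQUEST_URI}", r.RequestURI,
            ).Replace(target)
            http.Redirect(w, r, url, code)
        })
    }

    func main() {
        http.Handle("/", redirectHandler("301:https://${HOST}/new${REQUEST_URI}"))
    }

Note that a bare "https://..." target is unaffected: "https" is not a
valid status code, so the default 302 is kept.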
Erisa A 05d2dcaf49
words: remove a fish (#17704)
Some combinations are problematic in non-fish contexts.

Updates #words

Signed-off-by: Erisa A <erisa@tailscale.com>
1 month ago
Brad Fitzpatrick 8996254647 sessionrecording: fix regression in recent http2 package change
In 3f5c560fd4 I changed to use std net/http's HTTP/2 support,
instead of pulling in x/net/http2.

But I forgot to update DialTLSContext to DialContext, which meant it
was falling back to using the std net.Dialer for its dials, instead
of the passed-in one.

The tests only passed because they were using localhost addresses, so
the std net.Dialer worked. But in prod, where a tsnet Dialer would be
needed, it didn't work, and would time out for 10 seconds before
resorting to the old protocol.

So this fixes the tests to use an isolated in-memory network to prevent
that class of problem in the future. With the test change, the old code
fails and the new code passes.

Thanks to @jasonodonnell for debugging!

Updates #17304
Updates 3f5c560fd4

Change-Id: I3602bafd07dc6548e2c62985af9ac0afb3a0e967
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
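The class of fix, in sketch form (the dial parameter stands in for a
tsnet-style dialer; this is not the sessionrecording code itself): plain
TCP dials must go through the injected dialer, and std net/http then
layers TLS and HTTP/2 on top of the returned connection.

    package main

    import (
        "context"
        "net"
        "net/http"
    )

    func newTransport(dial func(ctx context.Context, network, addr string) (net.Conn, error)) *http.Transport {
        return &http.Transport{
            DialContext:       dial, // not DialTLSContext: let the transport do TLS itself
            ForceAttemptHTTP2: true,
        }
    }

    func main() {
        var d net.Dialer
        _ = newTransport(d.DialContext)
    }

Testing against an isolated in-memory network, as this commit does, makes
a transport that silently falls back to the std dialer fail instead of pass.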
Brad Fitzpatrick d5a40c01ab cmd/k8s-operator/generate: skip tests if no network or Helm is down
Updates helm/helm#31434

Change-Id: I5eb20e97ff543f883d5646c9324f50f54180851d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Harry Harpham 74f1d8bd87
cmd/tailscale/cli: unhide serve get-config and serve set-config (#17598)
Fixes tailscale/corp#33152

Signed-off-by: Harry Harpham <harry@tailscale.com>
1 month ago
Fernando Serboncini da90e3d8f2
cmd/k8s-operator: rename 'l' variables (#17700)
Single letter 'l' variables can eventually become confusing when
they're rendered in some fonts that make them similar to 1 or I.

Updates #cleanup

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>
1 month ago
M. J. Fromberger 06b092388e
ipn/ipnlocal: do not stall event processing for appc route updates (#17663)
A follow-up to #17411. Put AppConnector events into a task queue, as they may
take some time to process. Ensure that the queue is stopped at shutdown so that
cleanup will remain orderly.

Because events are delivered on a separate goroutine, slow processing of an
event does not cause an immediate problem; however, a subscriber that blocks
for a long time will push back on the bus as a whole. See
https://godoc.org/tailscale.com/util/eventbus#hdr-Expected_subscriber_behavior
for more discussion.

Updates #17192
Updates #15160

Change-Id: Ib313cc68aec273daf2b1ad79538266c81ef063e3
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
1 month ago
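In sketch form, the queue shape this describes (illustrative, not the
actual ipnlocal implementation): events are handed off to a single worker
goroutine so the bus is never blocked, and Stop drains the queue so
shutdown stays orderly.

    package main

    type taskQueue struct {
        tasks chan func()
        done  chan struct{}
    }

    func newTaskQueue() *taskQueue {
        q := &taskQueue{tasks: make(chan func(), 64), done: make(chan struct{})}
        go func() {
            defer close(q.done)
            for fn := range q.tasks {
                fn() // may be slow; runs off the bus's delivery goroutine
            }
        }()
        return q
    }

    func (q *taskQueue) Add(fn func()) { q.tasks <- fn }

    // Stop closes the queue and waits for queued tasks to finish.
    func (q *taskQueue) Stop() {
        close(q.tasks)
        <-q.done
    }

    func main() {
        q := newTaskQueue()
        q.Add(func() {})
        q.Stop()
    }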
Alex Chan 3c19addc21 tka: rename a mutex to `mu` instead of single-letter `l`
See http://go/no-ell

Updates tailscale/corp#33846

Signed-off-by: Alex Chan <alexc@tailscale.com>

Change-Id: I88ecd9db847e04237c1feab9dfcede5ca1050cc5
1 month ago
Joe Tsai 9ac8105fda
cmd/jsontags: add static analyzer for incompatible `json` struct tags (#17670)
This migrates an internal tool to open source
so that we can run it on the tailscale.com module as well.

This PR does not yet set up a CI to run this analyzer.

Updates tailscale/corp#791

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 month ago
Joe Tsai 478342a642
wgengine/netlog: embed node information in network flow logs (#17668)
This rewrites the netlog package to support embedding node information in network flow logs.
Some complexity comes from pre-computing the expected size of the log message
after JSON serialization, to ensure that we respect maximum body limits in log uploading.

We also fix a bug in tstun, where we were recording the IP address after SNAT,
which resulted in nonsensical connection flows being logged.

Updates tailscale/corp#33352

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 month ago
Joe Tsai fcb614a53e
cmd/jsonimports: add static analyzer for consistent "json" imports (#17669)
This migrates an internal tool to open source
so that we can run it on the tailscale.com module as well.
We add the "util/safediff" also as a dependency of the tool.

This PR does not yet set up a CI to run this analyzer.

Updates tailscale/corp#791

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
1 month ago
M. J. Fromberger 09a2a1048d
derp: fix an unchecked error in a test (#17694)
Found by staticcheck, the test was calling derphttp.NewClient but not checking
its error result before doing other things to it.

Updates #cleanup

Change-Id: I4ade35a7de7c473571f176e747866bc0ab5774db
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
1 month ago
Brad Fitzpatrick edb11e0e60 wgengine/magicsock: fix js/wasm crash regression loading non-existent portmapper
Thanks for the report, @Need-an-AwP!

Fixes #17681
Updates #9394

Change-Id: I2e0b722ef9b460bd7e79499192d1a315504ca84c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Will Norris 0a5ba8280f CODE_OF_CONDUCT.md: update code of conduct
Updates #cleanup

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
1 month ago
M. J. Fromberger db5815fb97
Revert "logtail: avoid racing eventbus subscriptions with Shutdown (#17639)" (#17684)
This reverts commit 4346615d77.
We averted the shutdown race, but will need to service the subscriber even when
we are not waiting for a change so that we do not delay the bus as a whole.

Updates #17638

Change-Id: I5488466ed83f5ad1141c95267f5ae54878a24657
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
1 month ago
Mario Minardi 02681732d1
.github: drop branches filter with single asterisk from workflows (#17682)
Drop usage of the branches filter with a single asterisk, as it matches
zero or more characters but not a forward slash, resulting in PRs to
branches with forward slashes in their names not having these workflows
run against them as expected.

Updates https://github.com/tailscale/corp/issues/33523

Signed-off-by: Mario Minardi <mario@tailscale.com>
1 month ago
Gesa Stupperich d2e4a20f26 ipn/ipnlocal/serve: error when PeerCaps serialisation fails
Also consolidates variable and header naming and amends the
CLI behavior:
* multiple app-caps have to be specified as a comma-separated list
* simple regex-based validation of app capability names is carried
  out during flag parsing

Signed-off-by: Gesa Stupperich <gesa@tailscale.com>
1 month ago
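A sketch of what the flag-parse validation could look like; the exact
pattern below is an assumption for illustration, not the shipped regex:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // appCapRx is a hypothetical pattern for capability names.
    var appCapRx = regexp.MustCompile(`^[a-zA-Z0-9._/:-]+$`)

    func parseAppCaps(flagVal string) ([]string, error) {
        caps := strings.Split(flagVal, ",")
        for _, c := range caps {
            if !appCapRx.MatchString(c) {
                // Name the single capability that failed, not the whole list.
                return nil, fmt.Errorf("invalid app capability %q", c)
            }
        }
        return caps, nil
    }

    func main() {
        fmt.Println(parseAppCaps("example.com/cap/one,example.com/cap/two"))
    }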
Gesa Stupperich d6fa899eba ipn/ipnlocal/serve: remove grant header truncation logic
Given that we filter based on the usercaps argument now, truncation
should not be necessary anymore.

Updates tailscale/corp#28372

Signed-off-by: Gesa Stupperich <gesa@tailscale.com>
1 month ago
Gesa Stupperich 576aacd459 ipn/ipnlocal/serve: add grant headers
Updates tailscale/corp#28372

Signed-off-by: Gesa Stupperich <gesa@tailscale.com>
1 month ago
srwareham f4e2720821
cmd/tailscale/cli: move JetKVM scripts to /userdata/init.d for persistence (#17610)
Updates #16524
Updates jetkvm/rv1106-system#34

Signed-off-by: srwareham <ebriouscoding@gmail.com>
1 month ago
Max Coulombe 34e992f59d
feature/identityfederation: strip query params on clientID (#17666)
Updates #9192

Signed-off-by: mcoulombe <max@tailscale.com>
1 month ago
Patrick O'Doherty a760cbe33f
control/controlclient: back out HW key attestation (#17664)
Temporarily back out the TPM-based hw attestation code while we debug
Windows exceptions.

Updates tailscale/corp#31269

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
1 month ago
M. J. Fromberger 4346615d77
logtail: avoid racing eventbus subscriptions with Shutdown (#17639)
When the eventbus is enabled, set up the subscription for change deltas at the
beginning when the client is created, rather than waiting for the first
awaitInternetUp check.

Otherwise, it is possible for a check to race with the client close in
Shutdown, which triggers a panic.

Updates #17638

Change-Id: I461c07939eca46699072b14b1814ecf28eec750c
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
1 month ago
Claus Lensbøl fd0e541e5d
net/tsdial: do not panic if setting the same eventbus twice (#17640)
Updates #17638

Signed-off-by: Claus Lensbøl <claus@tailscale.com>
1 month ago
Claus Lensbøl 7418583e47
health: compare warnable codes to avoid errors on release branch (#17637)
This compares the warnings we actually care about and skips the unstable
warnings and the changes with no warnings.

Fixes #17635

Signed-off-by: Claus Lensbøl <claus@tailscale.com>
1 month ago
Alex Chan d47c697748 ipn/ipnlocal: skip TKA bootstrap request if Tailnet Lock is unavailable
If you run tailscaled without passing a `--statedir`, Tailnet Lock is
unavailable -- we don't have a folder to store the AUMs in.

This causes a lot of unnecessary requests to bootstrap TKA, because
every time the node receives a NetMap with some TKA state, it tries to
bootstrap, fetches the bootstrap TKA state from the control plane, then
fails with the error:

    TKA sync error: bootstrap: network-lock is not supported in this
    configuration, try setting --statedir

We can't prevent the error, but we can skip the control plane request
that immediately gets dropped on the floor.

In local testing, a new node joining a tailnet caused *three* control
plane requests which were unused.

Updates tailscale/corp#19441

Signed-off-by: Alex Chan <alexc@tailscale.com>
1 month ago
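The short-circuit amounts to something like the following (names here are
illustrative): check whether TKA can be used at all before issuing the
bootstrap request, instead of fetching state that will be dropped on the
floor.

    package main

    // maybeBootstrapTKA skips the control-plane round-trip entirely when
    // there is no state directory to store AUMs in.
    func maybeBootstrapTKA(haveStateDir bool, fetchBootstrap func() error) error {
        if !haveStateDir {
            return nil // TKA unsupported in this configuration; don't fetch
        }
        return fetchBootstrap()
    }

    func main() {
        _ = maybeBootstrapTKA(false, func() error { return nil })
    }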
Brad Fitzpatrick 8576a802ca util/linuxfw: fix 32-bit arm regression with iptables
This fixes a regression from dd615c8fdd that moved the
newIPTablesRunner constructor from an any-Linux-GOARCH file to one that
was only built for amd64 and arm64, thus breaking iptables on other
platforms (notably 32-bit "arm", as seen on older Pis running Buster
with iptables).

Tested by hand on a Raspberry Pi 2 w/ Buster + iptables for now, for
lack of automated 32-bit arm tests at the moment. But filed #17629.

Fixes #17623
Updates #17629

Change-Id: Iac1a3d78f35d8428821b46f0fed3f3717891c1bd
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 month ago
Patrick O'Doherty 672b1f0e76
feature/tpm: use withSRK to probe TPM availability (#17627)
On some platforms, e.g. ChromeOS, the owner hierarchy might not always be
available to us. To avoid stale sealing exceptions later, we probe to
confirm it's working rather than rely solely on family indicator status.

Updates #17622

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
1 month ago
Patrick O'Doherty 36ad24b20f
feature/tpm: check TPM family data for compatibility (#17624)
Check that the TPM we have opened is advertised as a 2.0 family device
before using it for state sealing / hardware attestation.

Updates #17622

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
1 month ago
Will Norris afaa23c3b4 CODE_OF_CONDUCT: update document title
Updates #cleanup

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
1 month ago
Will Norris c2d62d25c6 CODE_OF_CONDUCT: convert to semantic line breaks
This reformats the existing text to have line breaks at sentences. This
commit contains no textual changes to the code of conduct, but is done
to make any subsequent changes easier to review. (sembr.org)

Also apply prettier formatting for consistency.

Updates #cleanup

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
1 month ago
Alex Chan c59c859f7d tsconsensus: mark several of these tests as known flaky
Updates https://github.com/tailscale/tailscale/issues/15627

Signed-off-by: Alex Chan <alexc@tailscale.com>
1 month ago
Alex Chan 23359dc727 tka: don't try to read AUMs which are partway through being written
Fixes https://github.com/tailscale/tailscale/issues/17600

Signed-off-by: Alex Chan <alexc@tailscale.com>
1 month ago
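A common way to keep readers from observing half-written files, shown as
a general sketch rather than tka's exact fix: write to a temp file in the
same directory, sync, then atomically rename into place.

    package main

    import (
        "os"
        "path/filepath"
    )

    func writeFileAtomic(path string, data []byte) error {
        tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op once the rename succeeds
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Sync(); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), path)
    }

    func main() { _ = writeFileAtomic("/tmp/aum.example", []byte("data")) }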
Alex Chan 2b448f0696 ipn, tka: improve the logging around TKA sync and AUM errors
*   When we do the TKA sync, log whether TKA is enabled and whether
    we want it to be enabled. This would help us see if a node is
    making bootstrap errors.

*   When we fail to look up an AUM locally, log the ID of the AUM
    rather than a generic "file does not exist" error.

    These AUM IDs are cryptographic hashes of the TKA state, which
    itself just contains public keys and signatures. These IDs aren't
    sensitive and logging them is safe.

Signed-off-by: Alex Chan <alexc@tailscale.com>

Updates https://github.com/tailscale/corp/issues/33594
1 month ago
Alex Chan 3944809a11 .github/workflows: pin the google/oss-fuzz GitHub Actions
Updates https://github.com/tailscale/corp/issues/31017

Signed-off-by: Alex Chan <alexc@tailscale.com>
1 month ago
Harry Harpham 675b1c6d54
cmd/tailscale/cli: error when advertising a Service from an untagged node (#17577)
Service hosts must be tagged nodes, meaning it is only valid to
advertise a Service from a machine which has at least one ACL tag.

Fixes tailscale/corp#33197

Signed-off-by: Harry Harpham <harry@tailscale.com>
1 month ago
Claus Lensbøl ab435ce3a6
client/systray: warn users launching the application with sudo (#17595)
If users start the application with sudo, DBUS is likely not available
or will not have the correct endpoints. We want to warn users when doing
this.

Closes #17593

Signed-off-by: Claus Lensbøl <claus@tailscale.com>
1 month ago
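The check can be sketched like this (names assumed; not the systray
client's actual code): a GUI process launched via sudo usually loses
access to the invoking user's DBUS session, so detect that case and warn.

    package main

    import (
        "log"
        "os"
    )

    func warnIfSudo() {
        if os.Geteuid() == 0 && os.Getenv("SUDO_USER") != "" {
            log.Println("warning: running under sudo; the user's DBUS session is " +
                "likely unavailable, so tray icons and notifications may not work")
        }
    }

    func main() { warnIfSudo() }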
M. J. Fromberger 3dde233cd3
ipn/ipnlocal: use eventbus.SubscribeFunc in LocalBackend (#17524)
This does not change which subscriptions are made, it only swaps them to use
the SubscribeFunc API instead of Subscribe.

Updates #15160
Updates #17487

Change-Id: Id56027836c96942206200567a118f8bcf9c07f64
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
1 month ago
Nick Khyl bf47d8e72b VERSION.txt: this is v1.91.0
Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 month ago

@ -0,0 +1,78 @@
#!/usr/bin/env bash
#
# This script sets up cigocacher, but should never fail the build if unsuccessful.
# It expects to run on a GitHub-hosted runner, and connects to cigocached over a
# private Azure network that is configured at the runner group level in GitHub.
#
# Usage: ./action.sh
# Inputs:
# URL: The cigocached server URL.
# Outputs:
# success: Whether cigocacher was set up successfully.
set -euo pipefail
if [ -z "${GITHUB_ACTIONS:-}" ]; then
echo "This script is intended to run within GitHub Actions"
exit 1
fi
if [ -z "${URL:-}" ]; then
echo "No cigocached URL is set, skipping cigocacher setup"
exit 0
fi
curl_and_parse() {
local jq_filter="$1"
local step="$2"
shift 2
local response
local curl_exit
response="$(curl -sSL "$@" 2>&1)" || curl_exit="$?"
if [ "${curl_exit:-0}" -ne "0" ]; then
echo "${step}: ${response}" >&2
return 1
fi
local parsed
local jq_exit
parsed=$(echo "${response}" | jq -e -r "${jq_filter}" 2>&1) || jq_exit=$?
if [ "${jq_exit:-0}" -ne "0" ]; then
echo "${step}: Failed to parse JSON response:" >&2
echo "${response}" >&2
return 1
fi
echo "${parsed}"
return 0
}
JWT="$(curl_and_parse ".value" "Fetching GitHub identity JWT" \
-H "Authorization: Bearer ${ACTIONS_ID_TOKEN_REQUEST_TOKEN}" \
"${ACTIONS_ID_TOKEN_REQUEST_URL}&audience=gocached")" || exit 0
# cigocached serves a TLS cert with an FQDN, but DNS is based on VM name.
HOST_AND_PORT="${URL#http*://}"
FIRST_LABEL="${HOST_AND_PORT/.*/}"
# Save CONNECT_TO for later steps to use.
echo "CONNECT_TO=${HOST_AND_PORT}:${FIRST_LABEL}:" >> "${GITHUB_ENV}"
BODY="$(jq -n --arg jwt "$JWT" '{"jwt": $jwt}')"
CIGOCACHER_TOKEN="$(curl_and_parse ".access_token" "Exchanging token with cigocached" \
--connect-to "${HOST_AND_PORT}:${FIRST_LABEL}:" \
-H "Content-Type: application/json" \
"$URL/auth/exchange-token" \
-d "$BODY")" || exit 0
# Wait until we successfully auth before building cigocacher to ensure we know
# it's worth building.
# TODO(tomhjp): bake cigocacher into runner image and use it for auth.
echo "Fetched cigocacher token successfully"
echo "::add-mask::${CIGOCACHER_TOKEN}"
echo "CIGOCACHER_TOKEN=${CIGOCACHER_TOKEN}" >> "${GITHUB_ENV}"
BIN_PATH="${RUNNER_TEMP:-/tmp}/cigocacher$(go env GOEXE)"
go build -o "${BIN_PATH}" ./cmd/cigocacher
echo "GOCACHEPROG=${BIN_PATH} --cache-dir ${CACHE_DIR} --cigocached-url ${URL} --token ${CIGOCACHER_TOKEN}" >> "${GITHUB_ENV}"
echo "success=true" >> "${GITHUB_OUTPUT}"

@ -0,0 +1,30 @@
name: go-cache
description: Set up build to use cigocacher
inputs:
  cigocached-url:
    description: URL of the cigocached server
    required: true
  checkout-path:
    description: Path to cloned repository
    required: true
  cache-dir:
    description: Directory to use for caching
    required: true
outputs:
  success:
    description: Whether cigocacher was set up successfully
    value: ${{ steps.setup.outputs.success }}
runs:
  using: composite
  steps:
    - name: Setup cigocacher
      id: setup
      shell: bash
      env:
        URL: ${{ inputs.cigocached-url }}
        CACHE_DIR: ${{ inputs.cache-dir }}
      working-directory: ${{ inputs.checkout-path }}
      run: .github/actions/go-cache/action.sh

@ -4,8 +4,6 @@ on:
branches: branches:
- main - main
pull_request: pull_request:
branches:
- "*"
jobs: jobs:
deploy: deploy:
runs-on: ubuntu-latest runs-on: ubuntu-latest

@ -2,7 +2,11 @@ name: golangci-lint
on: on:
# For now, only lint pull requests, not the main branches. # For now, only lint pull requests, not the main branches.
pull_request: pull_request:
paths:
- ".github/workflows/golangci-lint.yml"
- "**.go"
- "go.mod"
- "go.sum"
# TODO(andrew): enable for main branch after an initial waiting period. # TODO(andrew): enable for main branch after an initial waiting period.
#push: #push:
# branches: # branches:

@ -10,8 +10,6 @@ on:
- scripts/installer.sh - scripts/installer.sh
- .github/workflows/installer.yml - .github/workflows/installer.yml
pull_request: pull_request:
branches:
- "*"
paths: paths:
- scripts/installer.sh - scripts/installer.sh
- .github/workflows/installer.yml - .github/workflows/installer.yml
@ -60,6 +58,14 @@ jobs:
# Check a few images with wget rather than curl. # Check a few images with wget rather than curl.
- { image: "debian:oldstable-slim", deps: "wget" } - { image: "debian:oldstable-slim", deps: "wget" }
- { image: "debian:sid-slim", deps: "wget" } - { image: "debian:sid-slim", deps: "wget" }
- { image: "debian:stable-slim", deps: "curl" }
- { image: "ubuntu:24.04", deps: "curl" }
- { image: "fedora:latest", deps: "curl" }
# Test TAILSCALE_VERSION pinning on a subset of distros.
# Skip Alpine as community repos don't reliably keep old versions.
- { image: "debian:stable-slim", deps: "curl", version: "1.80.0" }
- { image: "ubuntu:24.04", deps: "curl", version: "1.80.0" }
- { image: "fedora:latest", deps: "curl", version: "1.80.0" }
runs-on: ubuntu-latest runs-on: ubuntu-latest
container: container:
image: ${{ matrix.image }} image: ${{ matrix.image }}
@ -96,12 +102,18 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: run installer - name: run installer
run: scripts/installer.sh run: scripts/installer.sh
env:
TAILSCALE_VERSION: ${{ matrix.version }}
# Package installation can fail in docker because systemd is not running # Package installation can fail in docker because systemd is not running
# as PID 1, so ignore errors at this step. The real check is the # as PID 1, so ignore errors at this step. The real check is the
# `tailscale --version` command below. # `tailscale --version` command below.
continue-on-error: true continue-on-error: true
- name: check tailscale version - name: check tailscale version
run: tailscale --version run: |
tailscale --version
if [ -n "${{ matrix.version }}" ]; then
tailscale --version | grep -q "^${{ matrix.version }}" || { echo "Version mismatch!"; exit 1; }
fi
notify-slack: notify-slack:
needs: test needs: test
runs-on: ubuntu-latest runs-on: ubuntu-latest

@ -2,8 +2,7 @@ name: request-dataplane-review
on: on:
pull_request: pull_request:
branches: types: [ opened, synchronize, reopened, ready_for_review ]
- "*"
paths: paths:
- ".github/workflows/request-dataplane-review.yml" - ".github/workflows/request-dataplane-review.yml"
- "**/*derp*" - "**/*derp*"
@ -12,6 +11,7 @@ on:
jobs: jobs:
request-dataplane-review: request-dataplane-review:
if: github.event.pull_request.draft == false
name: Request Dataplane Review name: Request Dataplane Review
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:

@ -136,21 +136,20 @@ jobs:
key: ${{ needs.gomod-cache.outputs.cache-key }} key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true enableCrossOsArchive: true
- name: Restore Cache - name: Restore Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 id: restore-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with: with:
# Note: unlike the other setups, this is only grabbing the mod download # Note: this is only restoring the build cache. Mod cache is shared amongst
# cache, rather than the whole mod directory, as the download cache # all jobs in the workflow.
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: | path: |
~/.cache/go-build ~/.cache/go-build
~\AppData\Local\go-build ~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be key: ${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: | restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2-${{ hashFiles('**/go.sum') }} ${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2- ${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-
- name: build all - name: build all
if: matrix.buildflags == '' # skip on race builder if: matrix.buildflags == '' # skip on race builder
working-directory: src working-directory: src
@ -206,12 +205,26 @@ jobs:
shell: bash shell: bash
run: | run: |
find $(go env GOCACHE) -type f -mmin +90 -delete find $(go env GOCACHE) -type f -mmin +90 -delete
- name: Save Cache
# Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only saving the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
windows: windows:
# windows-8vpu is a 2022 GitHub-managed runner in our permissions:
# org with 8 cores and 32 GB of RAM: id-token: write # This is required for requesting the GitHub action identity JWT that can auth to cigocached
# https://github.com/organizations/tailscale/settings/actions/github-hosted-runners/1 contents: read # This is required for actions/checkout
runs-on: windows-8vcpu # ci-windows-github-1 is a 2022 GitHub-managed runner in our org with 8 cores
# and 32 GB of RAM. It is connected to a private Azure VNet that hosts cigocached.
# https://github.com/organizations/tailscale/settings/actions/github-hosted-runners/5
runs-on: ci-windows-github-1
needs: gomod-cache needs: gomod-cache
name: Windows (${{ matrix.name || matrix.shard}}) name: Windows (${{ matrix.name || matrix.shard}})
strategy: strategy:
@ -220,8 +233,6 @@ jobs:
include: include:
- key: "win-bench" - key: "win-bench"
name: "benchmarks" name: "benchmarks"
- key: "win-tool-go"
name: "./tool/go"
- key: "win-shard-1-2" - key: "win-shard-1-2"
shard: "1/2" shard: "1/2"
- key: "win-shard-2-2" - key: "win-shard-2-2"
@ -230,44 +241,31 @@ jobs:
- name: checkout - name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with: with:
path: src path: ${{ github.workspace }}/src
- name: Install Go - name: Install Go
if: matrix.key != 'win-tool-go'
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0 uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with: with:
go-version-file: src/go.mod go-version-file: ${{ github.workspace }}/src/go.mod
cache: false cache: false
- name: Restore Go module cache - name: Restore Go module cache
if: matrix.key != 'win-tool-go'
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with: with:
path: gomodcache path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }} key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true enableCrossOsArchive: true
- name: Restore Cache - name: Set up cigocacher
if: matrix.key != 'win-tool-go' id: cigocacher-setup
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: ./src/.github/actions/go-cache
with: with:
path: | checkout-path: ${{ github.workspace }}/src
~/.cache/go-build cache-dir: ${{ github.workspace }}/cigocacher
~\AppData\Local\go-build cigocached-url: ${{ vars.CIGOCACHED_AZURE_URL }}
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ matrix.key }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ matrix.key }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ matrix.key }}-go-2-
- name: test-tool-go
if: matrix.key == 'win-tool-go'
working-directory: src
run: ./tool/go version
- name: test - name: test
if: matrix.key != 'win-bench' && matrix.key != 'win-tool-go' # skip on bench builder if: matrix.key != 'win-bench' # skip on bench builder
working-directory: src working-directory: src
run: go run ./cmd/testwrapper sharded:${{ matrix.shard }} run: go run ./cmd/testwrapper sharded:${{ matrix.shard }}
@ -279,12 +277,24 @@ jobs:
# the equals signs cause great confusion. # the equals signs cause great confusion.
run: go test ./... -bench . -benchtime 1x -run "^$" run: go test ./... -bench . -benchtime 1x -run "^$"
- name: Tidy cache - name: Print stats
if: matrix.key != 'win-tool-go'
working-directory: src
shell: bash shell: bash
if: steps.cigocacher-setup.outputs.success == 'true'
run: | run: |
find $(go env GOCACHE) -type f -mmin +90 -delete curl -sSL --connect-to "${CONNECT_TO}" -H "Authorization: Bearer ${CIGOCACHER_TOKEN}" "${{ vars.CIGOCACHED_AZURE_URL }}/session/stats" | jq .
win-tool-go:
runs-on: windows-latest
needs: gomod-cache
name: Windows (win-tool-go)
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: src
- name: test-tool-go
working-directory: src
run: ./tool/go version
privileged: privileged:
needs: gomod-cache needs: gomod-cache
@ -376,28 +386,26 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with: with:
path: src path: src
- name: Restore Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-
- name: Restore Go module cache - name: Restore Go module cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with: with:
path: gomodcache path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }} key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true enableCrossOsArchive: true
- name: Restore Cache
id: restore-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only restoring the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
restore-keys: |
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-
- name: build all - name: build all
working-directory: src working-directory: src
run: ./tool/go build ./cmd/... run: ./tool/go build ./cmd/...
@ -418,6 +426,17 @@ jobs:
shell: bash shell: bash
run: | run: |
find $(go env GOCACHE) -type f -mmin +90 -delete find $(go env GOCACHE) -type f -mmin +90 -delete
- name: Save Cache
# Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only saving the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
ios: # similar to cross above, but iOS can't build most of the repo. So, just ios: # similar to cross above, but iOS can't build most of the repo. So, just
# make it build a few smoke packages. # make it build a few smoke packages.
@ -466,28 +485,26 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with: with:
path: src path: src
- name: Restore Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-
- name: Restore Go module cache - name: Restore Go module cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with: with:
path: gomodcache path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }} key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true enableCrossOsArchive: true
- name: Restore Cache
id: restore-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only restoring the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
restore-keys: |
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-
- name: build core - name: build core
working-directory: src working-directory: src
run: ./tool/go build ./cmd/tailscale ./cmd/tailscaled run: ./tool/go build ./cmd/tailscale ./cmd/tailscaled
@ -501,6 +518,17 @@ jobs:
shell: bash shell: bash
run: | run: |
find $(go env GOCACHE) -type f -mmin +90 -delete find $(go env GOCACHE) -type f -mmin +90 -delete
- name: Save Cache
# Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only saving the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
android: android:
# similar to cross above, but android fails to build a few pieces of the # similar to cross above, but android fails to build a few pieces of the
@ -538,28 +566,26 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with: with:
path: src path: src
- name: Restore Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-go-2-
- name: Restore Go module cache - name: Restore Go module cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4 uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with: with:
path: gomodcache path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }} key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true enableCrossOsArchive: true
- name: Restore Cache
id: restore-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only restoring the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
restore-keys: |
${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-
${{ runner.os }}-js-wasm-go-
- name: build tsconnect client - name: build tsconnect client
working-directory: src working-directory: src
run: ./tool/go build ./cmd/tsconnect/wasm ./cmd/tailscale/cli run: ./tool/go build ./cmd/tsconnect/wasm ./cmd/tailscale/cli
@ -578,6 +604,17 @@ jobs:
shell: bash shell: bash
run: | run: |
find $(go env GOCACHE) -type f -mmin +90 -delete find $(go env GOCACHE) -type f -mmin +90 -delete
- name: Save Cache
# Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only saving the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
tailscale_go: # Subset of tests that depend on our custom Go toolchain. tailscale_go: # Subset of tests that depend on our custom Go toolchain.
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
@ -613,7 +650,9 @@ jobs:
steps: steps:
- name: build fuzzers - name: build fuzzers
id: build id: build
uses: google/oss-fuzz/infra/cifuzz/actions/build_fuzzers@master # As of 21 October 2025, this repo doesn't tag releases, so this commit
# hash is just the tip of master.
uses: google/oss-fuzz/infra/cifuzz/actions/build_fuzzers@1242ccb5b6352601e73c00f189ac2ae397242264
# continue-on-error makes steps.build.conclusion be 'success' even if # continue-on-error makes steps.build.conclusion be 'success' even if
# steps.build.outcome is 'failure'. This means this step does not # steps.build.outcome is 'failure'. This means this step does not
# contribute to the job's overall pass/fail evaluation. # contribute to the job's overall pass/fail evaluation.
@ -643,7 +682,9 @@ jobs:
# report a failure because TS_FUZZ_CURRENTLY_BROKEN is set to the wrong # report a failure because TS_FUZZ_CURRENTLY_BROKEN is set to the wrong
# value. # value.
if: steps.build.outcome == 'success' if: steps.build.outcome == 'success'
uses: google/oss-fuzz/infra/cifuzz/actions/run_fuzzers@master # As of 21 October 2025, this repo doesn't tag releases, so this commit
# hash is just the tip of master.
uses: google/oss-fuzz/infra/cifuzz/actions/run_fuzzers@1242ccb5b6352601e73c00f189ac2ae397242264
with: with:
oss-fuzz-project-name: 'tailscale' oss-fuzz-project-name: 'tailscale'
fuzz-seconds: 150 fuzz-seconds: 150
@ -699,6 +740,7 @@ jobs:
run: | run: |
pkgs=$(./tool/go list ./... | grep -Ev 'dnsfallback|k8s-operator|xdp') pkgs=$(./tool/go list ./... | grep -Ev 'dnsfallback|k8s-operator|xdp')
./tool/go generate $pkgs ./tool/go generate $pkgs
git add -N . # ensure untracked files are noticed
echo echo
echo echo
git diff --name-only --exit-code || (echo "The files above need updating. Please run 'go generate'."; exit 1) git diff --name-only --exit-code || (echo "The files above need updating. Please run 'go generate'."; exit 1)

@ -0,0 +1,38 @@
name: tailscale.com/cmd/vet
env:
  HOME: ${{ github.workspace }}
  # GOMODCACHE is the same definition on all OSes. Within the workspace, we use
  # toplevel directories "src" (for the checked out source code), and "gomodcache"
  # and other caches as siblings to follow.
  GOMODCACHE: ${{ github.workspace }}/gomodcache
on:
  push:
    branches:
      - main
      - "release-branch/*"
    paths:
      - "**.go"
  pull_request:
    paths:
      - "**.go"
jobs:
  vet:
    runs-on: [ self-hosted, linux ]
    timeout-minutes: 5
    steps:
      - name: Check out code
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          path: src
      - name: Build 'go vet' tool
        working-directory: src
        run: ./tool/go build -o /tmp/vettool tailscale.com/cmd/vet
      - name: Run 'go vet'
        working-directory: src
        run: ./tool/go vet -vettool=/tmp/vettool tailscale.com/...

@ -3,8 +3,6 @@ on:
workflow_dispatch: workflow_dispatch:
# For now, only run on requests, not the main branches. # For now, only run on requests, not the main branches.
pull_request: pull_request:
branches:
- "*"
paths: paths:
- "client/web/**" - "client/web/**"
- ".github/workflows/webclient.yml" - ".github/workflows/webclient.yml"

@ -1,147 +1,103 @@
# Contributor Covenant Code of Conduct # Tailscale Community Code of Conduct
## Our Pledge ## Our Pledge
We are committed to creating an open, welcoming, diverse, inclusive, We are committed to creating an open, welcoming, diverse, inclusive, healthy and respectful community.
healthy and respectful community. Unacceptable, harmful and inappropriate behavior will not be tolerated.
## Our Standards ## Our Standards
Examples of behavior that contributes to a positive environment for our Examples of behavior that contributes to a positive environment for our community include:
community include:
* Demonstrating empathy and kindness toward other people. - Demonstrating empathy and kindness toward other people.
* Being respectful of differing opinions, viewpoints, and experiences. - Being respectful of differing opinions, viewpoints, and experiences.
* Giving and gracefully accepting constructive feedback. - Giving and gracefully accepting constructive feedback.
* Accepting responsibility and apologizing to those affected by our - Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience.
mistakes, and learning from the experience. - Focusing on what is best not just for us as individuals, but for the overall community.
* Focusing on what is best not just for us as individuals, but for the
overall community.
Examples of unacceptable behavior include without limitation: Examples of unacceptable behavior include without limitation:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind. - The use of language, imagery or emojis (collectively "content") that is racist, sexist, homophobic, transphobic, or otherwise harassing or discriminatory based on any protected characteristic.
* The use of violent, intimidating or bullying language or imagery. - The use of sexualized content and sexual attention or advances of any kind.
* Trolling, insulting or derogatory comments, and personal or - The use of violent, intimidating or bullying content.
political attacks. - Trolling, concern trolling, insulting or derogatory comments, and personal or political attacks.
* Public or private harassment. - Public or private harassment.
* Publishing others' private information, such as a physical or email - Publishing others' personal information, such as a photo, physical address, email address, online profile information, or other personal information, without their explicit permission or with the intent to bully or harass the other person.
address, without their explicit permission. - Posting deep fake or other AI generated content about or involving another person without the explicit permission.
* Spamming community channels and members, such as sending repeat messages, - Spamming community channels and members, such as sending repeat messages, low-effort content, or automated messages.
low-effort content, or automated messages. - Phishing or any similar activity.
* Phishing or any similar activity; - Distributing or promoting malware.
* Distributing or promoting malware; - The use of any coded or suggestive content to hide or provoke otherwise unacceptable behavior.
* Other conduct which could reasonably be considered inappropriate in a - Other conduct which could reasonably be considered harmful, illegal, or inappropriate in a professional setting.
professional setting.
Please also see the Tailscale Acceptable Use Policy, available at [tailscale.com/tailscale-aup](https://tailscale.com/tailscale-aup).
Please also see the Tailscale Acceptable Use Policy, available at
[tailscale.com/tailscale-aup](https://tailscale.com/tailscale-aup). ## Reporting Incidents
# Reporting Incidents Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to Tailscale directly via <info@tailscale.com>, or to the community leaders or moderators via DM or similar.
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to Tailscale directly via info@tailscale.com, or to
the community leaders or moderators via DM or similar.
All complaints will be reviewed and investigated promptly and fairly. All complaints will be reviewed and investigated promptly and fairly.
We will respect the privacy and safety of the reporter of any issues. We will respect the privacy and safety of the reporter of any issues.
Please note that this community is not moderated by staff 24/7, and we Please note that this community is not moderated by staff 24/7, and we do not have, and do not undertake, any obligation to prescreen, monitor, edit, or remove any content or data, or to actively seek facts or circumstances indicating illegal activity.
do not have, and do not undertake, any obligation to prescreen, monitor, While we strive to keep the community safe and welcoming, moderation may not be immediate at all hours.
edit, or remove any content or data, or to actively seek facts or
circumstances indicating illegal activity. While we strive to keep the
community safe and welcoming, moderation may not be immediate at all hours.
If you encounter any issues, report them using the appropriate channels. If you encounter any issues, report them using the appropriate channels.
## Enforcement ## Enforcement Guidelines
Community leaders and moderators are responsible for clarifying and
enforcing our standards of acceptable behavior and will take appropriate
and fair corrective action in response to any behavior that they deem
inappropriate, threatening, offensive, or harmful.
Community leaders and moderators have the right and responsibility to remove, Community leaders and moderators are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
edit, or reject comments, commits, code, wiki edits, issues, and other
contributions that are not aligned to this Community Code of Conduct.
Tailscale retains full discretion to take action (or not) in response
to a violation of these guidelines with or without notice or liability
to you. We will interpret our policies and resolve disputes in favor of
protecting users, customers, the public, our community and our company,
as a whole.
## Enforcement Guidelines

Community leaders and moderators have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Community Code of Conduct.

Tailscale retains full discretion to take action (or not) in response to a violation of these guidelines with or without notice or liability to you.

We will interpret our policies and resolve disputes in favor of protecting users, customers, the public, our community and our company, as a whole.

Community leaders will follow these community enforcement guidelines in determining the consequences for any action they deem in violation of this Code of Conduct, and retain full discretion to apply the enforcement guidelines as necessary depending on the circumstances:

### 1. Correction

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing this Community Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

## Acceptable Use Policy

Violation of this Community Code of Conduct may also violate the Tailscale Acceptable Use Policy, which may result in suspension or termination of your Tailscale account. For more information, please see the Tailscale Acceptable Use Policy, available at [tailscale.com/tailscale-aup](https://tailscale.com/tailscale-aup).

## Privacy

Please see the Tailscale [Privacy Policy](https://tailscale.com/privacy-policy) for more information about how Tailscale collects, uses, discloses and protects information.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at <https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>.

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at <https://www.contributor-covenant.org/faq>. Translations are available at <https://www.contributor-covenant.org/translations>.

@@ -1 +1 @@
-1.89.0
+1.93.0

@@ -16,9 +16,9 @@ import (
	"net/netip"
	"slices"
	"strings"
-	"sync"
	"time"

+	"tailscale.com/syncs"
	"tailscale.com/types/appctype"
	"tailscale.com/types/logger"
	"tailscale.com/types/views"
@@ -139,7 +139,7 @@ type AppConnector struct {
	hasStoredRoutes bool

	// mu guards the fields that follow
-	mu sync.Mutex
+	mu syncs.Mutex

	// domains is a map of lower case domain names with no trailing dot, to an
	// ordered list of resolved IP addresses.
@@ -203,12 +203,12 @@ func NewAppConnector(c Config) *AppConnector {
		ac.wildcards = c.RouteInfo.Wildcards
		ac.controlRoutes = c.RouteInfo.Control
	}
-	ac.writeRateMinute = newRateLogger(time.Now, time.Minute, func(c int64, s time.Time, l int64) {
-		ac.logf("routeInfo write rate: %d in minute starting at %v (%d routes)", c, s, l)
-		metricStoreRoutes(c, l)
+	ac.writeRateMinute = newRateLogger(time.Now, time.Minute, func(c int64, s time.Time, ln int64) {
+		ac.logf("routeInfo write rate: %d in minute starting at %v (%d routes)", c, s, ln)
+		metricStoreRoutes(c, ln)
	})
-	ac.writeRateDay = newRateLogger(time.Now, 24*time.Hour, func(c int64, s time.Time, l int64) {
-		ac.logf("routeInfo write rate: %d in 24 hours starting at %v (%d routes)", c, s, l)
+	ac.writeRateDay = newRateLogger(time.Now, 24*time.Hour, func(c int64, s time.Time, ln int64) {
+		ac.logf("routeInfo write rate: %d in 24 hours starting at %v (%d routes)", c, s, ln)
	})
	return ac
}
@@ -510,8 +510,8 @@ func (e *AppConnector) addDomainAddrLocked(domain string, addr netip.Addr) {
	slices.SortFunc(e.domains[domain], compareAddr)
}

-func compareAddr(l, r netip.Addr) int {
-	return l.Compare(r)
+func compareAddr(a, b netip.Addr) int {
+	return a.Compare(b)
}

// routesWithout returns a without b where a and b

@@ -0,0 +1,61 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause

package appc

import (
	"errors"
	"net/netip"

	"go4.org/netipx"
)

// errPoolExhausted is returned when there are no more addresses to iterate over.
var errPoolExhausted = errors.New("ip pool exhausted")

// ippool allows for iteration over all the addresses within a netipx.IPSet.
// netipx.IPSet has a Ranges call that returns the "minimum and sorted set of IP ranges that covers [the set]".
// netipx.IPRange is "an inclusive range of IP addresses from the same address family". So we can iterate over
// all the addresses in the set by keeping track of the last address we returned, calling Next on it
// to get the new one, and, if we run off the edge of the current range, starting on the next one.
type ippool struct {
	// ranges defines the addresses in the pool.
	ranges []netipx.IPRange
	// last tracks the last address the pool handed out.
	last netip.Addr
	// rangeIdx tracks which netipx.IPRange from the IPSet we are currently on.
	rangeIdx int
}

func newIPPool(ipset *netipx.IPSet) *ippool {
	if ipset == nil {
		return &ippool{}
	}
	return &ippool{ranges: ipset.Ranges()}
}

// next returns the next address from the set, or errPoolExhausted if we have
// iterated over the whole set.
func (ipp *ippool) next() (netip.Addr, error) {
	if ipp.rangeIdx >= len(ipp.ranges) {
		// ipset is empty or we have iterated off the end
		return netip.Addr{}, errPoolExhausted
	}
	if !ipp.last.IsValid() {
		// not initialized yet
		ipp.last = ipp.ranges[0].From()
		return ipp.last, nil
	}
	currRange := ipp.ranges[ipp.rangeIdx]
	if ipp.last == currRange.To() {
		// we have exhausted the current range; move on to the next one
		ipp.rangeIdx++
		if ipp.rangeIdx >= len(ipp.ranges) {
			return netip.Addr{}, errPoolExhausted
		}
		ipp.last = ipp.ranges[ipp.rangeIdx].From()
		return ipp.last, nil
	}
	ipp.last = ipp.last.Next()
	return ipp.last, nil
}

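The pool walks each inclusive IPRange in order, so a caller simply loops on next until it returns errPoolExhausted. A minimal sketch of that pattern (a hypothetical helper in package appc; the /30 prefix and the helper's name are illustrative, not part of the diff — the real coverage is the test file below):

// exampleIPPoolWalk sketches the intended usage of ippool: build an IPSet,
// wrap it in a pool, and call next until the pool reports exhaustion.
func exampleIPPoolWalk() {
	var b netipx.IPSetBuilder
	b.AddPrefix(netip.MustParsePrefix("10.0.0.0/30")) // 10.0.0.0 through 10.0.0.3
	ipset, err := b.IPSet()
	if err != nil {
		panic(err)
	}
	pool := newIPPool(ipset)
	for {
		addr, err := pool.next()
		if errors.Is(err, errPoolExhausted) {
			break // every address in the set has been handed out exactly once
		}
		fmt.Println(addr)
	}
}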
@@ -0,0 +1,60 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause

package appc

import (
	"errors"
	"net/netip"
	"testing"

	"go4.org/netipx"
	"tailscale.com/util/must"
)

func TestNext(t *testing.T) {
	a := ippool{}
	_, err := a.next()
	if !errors.Is(err, errPoolExhausted) {
		t.Fatalf("expected errPoolExhausted, got %v", err)
	}

	var isb netipx.IPSetBuilder
	ipset := must.Get(isb.IPSet())
	b := newIPPool(ipset)
	_, err = b.next()
	if !errors.Is(err, errPoolExhausted) {
		t.Fatalf("expected errPoolExhausted, got %v", err)
	}

	isb.AddRange(netipx.IPRangeFrom(netip.MustParseAddr("192.168.0.0"), netip.MustParseAddr("192.168.0.2")))
	isb.AddRange(netipx.IPRangeFrom(netip.MustParseAddr("200.0.0.0"), netip.MustParseAddr("200.0.0.0")))
	isb.AddRange(netipx.IPRangeFrom(netip.MustParseAddr("201.0.0.0"), netip.MustParseAddr("201.0.0.1")))
	ipset = must.Get(isb.IPSet())
	c := newIPPool(ipset)
	expected := []string{
		"192.168.0.0",
		"192.168.0.1",
		"192.168.0.2",
		"200.0.0.0",
		"201.0.0.0",
		"201.0.0.1",
	}
	for i, want := range expected {
		addr, err := c.next()
		if err != nil {
			t.Fatal(err)
		}
		if addr != netip.MustParseAddr(want) {
			t.Fatalf("next call %d want: %s, got: %v", i, want, addr)
		}
	}
	_, err = c.next()
	if !errors.Is(err, errPoolExhausted) {
		t.Fatalf("expected errPoolExhausted, got %v", err)
	}
	_, err = c.next()
	if !errors.Is(err, errPoolExhausted) {
		t.Fatalf("expected errPoolExhausted, got %v", err)
	}
}

@@ -31,11 +31,11 @@ func TestDoesNotOverwriteIrregularFiles(t *testing.T) {
	// The least troublesome thing to make that is not a file is a unix socket.
	// Making a null device sadly requires root.
-	l, err := net.ListenUnix("unix", &net.UnixAddr{Name: path, Net: "unix"})
+	ln, err := net.ListenUnix("unix", &net.UnixAddr{Name: path, Net: "unix"})
	if err != nil {
		t.Fatal(err)
	}
-	defer l.Close()
+	defer ln.Close()

	err = WriteFile(path, []byte("hello"), 0644)
	if err == nil {

@@ -44,7 +44,7 @@ var (
)

func replaceFileW(replaced *uint16, replacement *uint16, backup *uint16, flags uint32, exclude unsafe.Pointer, reserved unsafe.Pointer) (err error) {
-	r1, _, e1 := syscall.Syscall6(procReplaceFileW.Addr(), 6, uintptr(unsafe.Pointer(replaced)), uintptr(unsafe.Pointer(replacement)), uintptr(unsafe.Pointer(backup)), uintptr(flags), uintptr(exclude), uintptr(reserved))
+	r1, _, e1 := syscall.SyscallN(procReplaceFileW.Addr(), uintptr(unsafe.Pointer(replaced)), uintptr(unsafe.Pointer(replacement)), uintptr(unsafe.Pointer(backup)), uintptr(flags), uintptr(exclude), uintptr(reserved))
	if int32(r1) == 0 {
		err = errnoErr(e1)
	}

@@ -24,7 +24,7 @@ type fakeBIRD struct {

func newFakeBIRD(t *testing.T, protocols ...string) *fakeBIRD {
	sock := filepath.Join(t.TempDir(), "sock")
-	l, err := net.Listen("unix", sock)
+	ln, err := net.Listen("unix", sock)
	if err != nil {
		t.Fatal(err)
	}
@@ -33,7 +33,7 @@ func newFakeBIRD(t *testing.T, protocols ...string) *fakeBIRD {
		pe[p] = false
	}
	return &fakeBIRD{
-		Listener:         l,
+		Listener:         ln,
		protocolsEnabled: pe,
		sock:             sock,
	}
@@ -123,12 +123,12 @@ type hangingListener struct {

func newHangingListener(t *testing.T) *hangingListener {
	sock := filepath.Join(t.TempDir(), "sock")
-	l, err := net.Listen("unix", sock)
+	ln, err := net.Listen("unix", sock)
	if err != nil {
		t.Fatal(err)
	}
	return &hangingListener{
-		Listener: l,
+		Listener: ln,
		t:        t,
		done:     make(chan struct{}),
		sock:     sock,

@@ -38,6 +38,7 @@ import (
	"tailscale.com/net/udprelay/status"
	"tailscale.com/paths"
	"tailscale.com/safesocket"
+	"tailscale.com/syncs"
	"tailscale.com/tailcfg"
	"tailscale.com/types/appctype"
	"tailscale.com/types/dnstype"
@@ -596,6 +597,19 @@ func (lc *Client) DebugResultJSON(ctx context.Context, action string) (any, error) {
	return x, nil
}

+// QueryOptionalFeatures queries the optional features supported by the Tailscale daemon.
+func (lc *Client) QueryOptionalFeatures(ctx context.Context) (*apitype.OptionalFeatures, error) {
+	body, err := lc.send(ctx, "POST", "/localapi/v0/debug-optional-features", 200, nil)
+	if err != nil {
+		return nil, fmt.Errorf("error %w: %s", err, body)
+	}
+	var x apitype.OptionalFeatures
+	if err := json.Unmarshal(body, &x); err != nil {
+		return nil, err
+	}
+	return &x, nil
+}
+
// SetDevStoreKeyValue sets a statestore key/value. It's only meant for development.
// The schema (including when keys are re-read) is not a stable interface.
func (lc *Client) SetDevStoreKeyValue(ctx context.Context, key, value string) error {
@@ -1350,7 +1364,7 @@ type IPNBusWatcher struct {
	httpRes *http.Response
	dec     *json.Decoder

-	mu     sync.Mutex
+	mu     syncs.Mutex
	closed bool
}
@@ -1387,6 +1401,23 @@ func (lc *Client) SuggestExitNode(ctx context.Context) (apitype.ExitNodeSuggestionResponse, error) {
	return decodeJSON[apitype.ExitNodeSuggestionResponse](body)
}

+// CheckSOMarkInUse reports whether the socket mark option is in use. This will only
+// be true if tailscale is running on Linux and tailscaled uses SO_MARK.
+func (lc *Client) CheckSOMarkInUse(ctx context.Context) (bool, error) {
+	body, err := lc.get200(ctx, "/localapi/v0/check-so-mark-in-use")
+	if err != nil {
+		return false, err
+	}
+	var res struct {
+		UseSOMark bool `json:"useSoMark"`
+	}
+	if err := json.Unmarshal(body, &res); err != nil {
+		return false, fmt.Errorf("invalid JSON from check-so-mark-in-use: %w", err)
+	}
+	return res.UseSOMark, nil
+}
+
// ShutdownTailscaled requests a graceful shutdown of tailscaled.
func (lc *Client) ShutdownTailscaled(ctx context.Context) error {
	_, err := lc.send(ctx, "POST", "/localapi/v0/shutdown", 200, nil)

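Both new LocalAPI methods hang off the same client, so a caller can probe the daemon's build-time features before relying on them. A minimal sketch, assuming the tailscale.com/client/local import path for the Client shown above (the package path is not spelled out in this diff) and that its zero value talks to the local tailscaled:

package main

import (
	"context"
	"fmt"
	"log"

	"tailscale.com/client/local"
)

func main() {
	var lc local.Client // assumed: zero value connects to the local daemon
	ctx := context.Background()

	// List the optional features compiled into this tailscaled build.
	feats, err := lc.QueryOptionalFeatures(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for name, enabled := range feats.Features {
		fmt.Printf("%s: %v\n", name, enabled)
	}

	// On Linux, report whether tailscaled is using SO_MARK.
	inUse, err := lc.CheckSOMarkInUse(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("SO_MARK in use:", inUse)
}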
@@ -158,6 +158,18 @@ func init() {

// onReady is called by the systray package when the menu is ready to be built.
func (menu *Menu) onReady() {
	log.Printf("starting")
+	if os.Getuid() == 0 || os.Getuid() != os.Geteuid() || os.Getenv("SUDO_USER") != "" || os.Getenv("DOAS_USER") != "" {
+		fmt.Fprintln(os.Stderr, `
+It appears that you might be running the systray with sudo/doas.
+This can lead to issues with D-Bus, and should be avoided.
+The systray application should be run with the same user as your desktop session.
+This usually means that you should run the application like:
+
+	tailscale systray
+
+See https://tailscale.com/kb/1597/linux-systray for more information.`)
+	}
+
	setAppIcon(disconnected)
	menu.rebuild()
@@ -500,7 +512,7 @@ func (menu *Menu) watchIPNBus() {
}

func (menu *Menu) watchIPNBusInner() error {
-	watcher, err := menu.lc.WatchIPNBus(menu.bgCtx, ipn.NotifyNoPrivateKeys)
+	watcher, err := menu.lc.WatchIPNBus(menu.bgCtx, 0)
	if err != nil {
		return fmt.Errorf("watching ipn bus: %w", err)
	}

@@ -94,3 +94,13 @@ type DNSQueryResponse struct {
	// Resolvers is the list of resolvers that the forwarder deemed able to resolve the query.
	Resolvers []*dnstype.Resolver
}

+// OptionalFeatures describes which optional features are enabled in the build.
+type OptionalFeatures struct {
+	// Features is the map of optional feature names to whether they are
+	// enabled.
+	//
+	// Disabled features may be absent from the map. (That is, false values
+	// are not guaranteed to be present.)
+	Features map[string]bool
+}

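Because false values may be omitted from Features, readers should treat a missing key the same as a disabled one rather than requiring it to be present. A small sketch of that check (hypothetical helper name, written as if inside the same package as OptionalFeatures):

// featureEnabled reports whether the named optional feature is enabled.
// An absent key reads as the map's zero value, false, which matches the
// "disabled features may be absent" contract documented above.
func featureEnabled(f *OptionalFeatures, name string) bool {
	return f != nil && f.Features[name]
}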
@@ -34,10 +34,10 @@
    "prettier-plugin-organize-imports": "^3.2.2",
    "tailwindcss": "^3.3.3",
    "typescript": "^5.3.3",
-    "vite": "^5.1.7",
+    "vite": "^5.4.21",
    "vite-plugin-svgr": "^4.2.0",
    "vite-tsconfig-paths": "^3.5.0",
-    "vitest": "^1.3.1"
+    "vitest": "^1.6.1"
  },
  "resolutions": {
    "@typescript-eslint/eslint-plugin": "^6.2.1",

@@ -66,7 +66,7 @@ export default function useExitNodes(node: NodeData, filter?: string) {
  // match from a list of exit node `options` to `nodes`.
  const addBestMatchNode = (
    options: ExitNode[],
-    name: (l: ExitNodeLocation) => string
+    name: (loc: ExitNodeLocation) => string
  ) => {
    const bestNode = highestPriorityNode(options)
    if (!bestNode || !bestNode.Location) {
@@ -86,7 +86,7 @@ export default function useExitNodes(node: NodeData, filter?: string) {
    locationNodesMap.forEach(
      // add one node per country
      (countryNodes) =>
-        addBestMatchNode(flattenMap(countryNodes), (l) => l.Country)
+        addBestMatchNode(flattenMap(countryNodes), (loc) => loc.Country)
    )
  } else {
    // Otherwise, show the best match on a city-level,
@@ -97,12 +97,12 @@ export default function useExitNodes(node: NodeData, filter?: string) {
      countryNodes.forEach(
        // add one node per city
        (cityNodes) =>
-          addBestMatchNode(cityNodes, (l) => `${l.Country}: ${l.City}`)
+          addBestMatchNode(cityNodes, (loc) => `${loc.Country}: ${loc.City}`)
      )
      // add the "Country: Best Match" node
      addBestMatchNode(
        flattenMap(countryNodes),
-        (l) => `${l.Country}: Best Match`
+        (loc) => `${loc.Country}: Best Match`
      )
    })
  }

@@ -1130,120 +1130,120 @@
  resolved "https://registry.yarnpkg.com/@cush/relative/-/relative-1.0.0.tgz#8cd1769bf9bde3bb27dac356b1bc94af40f6cc16"
  integrity sha512-RpfLEtTlyIxeNPGKcokS+p3BZII/Q3bYxryFRglh5H3A3T8q9fsLYm72VYAMEOOIBLEa8o93kFLiBDUWKrwXZA==

-"@esbuild/aix-ppc64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/aix-ppc64/-/aix-ppc64-0.19.12.tgz#d1bc06aedb6936b3b6d313bf809a5a40387d2b7f"
-  integrity sha512-bmoCYyWdEL3wDQIVbcyzRyeKLgk2WtWLTWz1ZIAZF/EGbNOwSA6ew3PftJ1PqMiOOGu0OyFMzG53L0zqIpPeNA==
+"@esbuild/aix-ppc64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/aix-ppc64/-/aix-ppc64-0.21.5.tgz#c7184a326533fcdf1b8ee0733e21c713b975575f"
+  integrity sha512-1SDgH6ZSPTlggy1yI6+Dbkiz8xzpHJEVAlF/AM1tHPLsf5STom9rwtjE4hKAF20FfXXNTFqEYXyJNWh1GiZedQ==
-"@esbuild/android-arm64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/android-arm64/-/android-arm64-0.19.12.tgz#7ad65a36cfdb7e0d429c353e00f680d737c2aed4"
-  integrity sha512-P0UVNGIienjZv3f5zq0DP3Nt2IE/3plFzuaS96vihvD0Hd6H/q4WXUGpCxD/E8YrSXfNyRPbpTq+T8ZQioSuPA==
+"@esbuild/android-arm64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/android-arm64/-/android-arm64-0.21.5.tgz#09d9b4357780da9ea3a7dfb833a1f1ff439b4052"
+  integrity sha512-c0uX9VAUBQ7dTDCjq+wdyGLowMdtR/GoC2U5IYk/7D1H1JYC0qseD7+11iMP2mRLN9RcCMRcjC4YMclCzGwS/A==
-"@esbuild/android-arm@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/android-arm/-/android-arm-0.19.12.tgz#b0c26536f37776162ca8bde25e42040c203f2824"
-  integrity sha512-qg/Lj1mu3CdQlDEEiWrlC4eaPZ1KztwGJ9B6J+/6G+/4ewxJg7gqj8eVYWvao1bXrqGiW2rsBZFSX3q2lcW05w==
+"@esbuild/android-arm@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/android-arm/-/android-arm-0.21.5.tgz#9b04384fb771926dfa6d7ad04324ecb2ab9b2e28"
+  integrity sha512-vCPvzSjpPHEi1siZdlvAlsPxXl7WbOVUBBAowWug4rJHb68Ox8KualB+1ocNvT5fjv6wpkX6o/iEpbDrf68zcg==
-"@esbuild/android-x64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/android-x64/-/android-x64-0.19.12.tgz#cb13e2211282012194d89bf3bfe7721273473b3d"
-  integrity sha512-3k7ZoUW6Q6YqhdhIaq/WZ7HwBpnFBlW905Fa4s4qWJyiNOgT1dOqDiVAQFwBH7gBRZr17gLrlFCRzF6jFh7Kew==
+"@esbuild/android-x64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/android-x64/-/android-x64-0.21.5.tgz#29918ec2db754cedcb6c1b04de8cd6547af6461e"
+  integrity sha512-D7aPRUUNHRBwHxzxRvp856rjUHRFW1SdQATKXH2hqA0kAZb1hKmi02OpYRacl0TxIGz/ZmXWlbZgjwWYaCakTA==
-"@esbuild/darwin-arm64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/darwin-arm64/-/darwin-arm64-0.19.12.tgz#cbee41e988020d4b516e9d9e44dd29200996275e"
-  integrity sha512-B6IeSgZgtEzGC42jsI+YYu9Z3HKRxp8ZT3cqhvliEHovq8HSX2YX8lNocDn79gCKJXOSaEot9MVYky7AKjCs8g==
+"@esbuild/darwin-arm64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/darwin-arm64/-/darwin-arm64-0.21.5.tgz#e495b539660e51690f3928af50a76fb0a6ccff2a"
+  integrity sha512-DwqXqZyuk5AiWWf3UfLiRDJ5EDd49zg6O9wclZ7kUMv2WRFr4HKjXp/5t8JZ11QbQfUS6/cRCKGwYhtNAY88kQ==
-"@esbuild/darwin-x64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/darwin-x64/-/darwin-x64-0.19.12.tgz#e37d9633246d52aecf491ee916ece709f9d5f4cd"
-  integrity sha512-hKoVkKzFiToTgn+41qGhsUJXFlIjxI/jSYeZf3ugemDYZldIXIxhvwN6erJGlX4t5h417iFuheZ7l+YVn05N3A==
+"@esbuild/darwin-x64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/darwin-x64/-/darwin-x64-0.21.5.tgz#c13838fa57372839abdddc91d71542ceea2e1e22"
+  integrity sha512-se/JjF8NlmKVG4kNIuyWMV/22ZaerB+qaSi5MdrXtd6R08kvs2qCN4C09miupktDitvh8jRFflwGFBQcxZRjbw==
-"@esbuild/freebsd-arm64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/freebsd-arm64/-/freebsd-arm64-0.19.12.tgz#1ee4d8b682ed363b08af74d1ea2b2b4dbba76487"
-  integrity sha512-4aRvFIXmwAcDBw9AueDQ2YnGmz5L6obe5kmPT8Vd+/+x/JMVKCgdcRwH6APrbpNXsPz+K653Qg8HB/oXvXVukA==
+"@esbuild/freebsd-arm64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/freebsd-arm64/-/freebsd-arm64-0.21.5.tgz#646b989aa20bf89fd071dd5dbfad69a3542e550e"
+  integrity sha512-5JcRxxRDUJLX8JXp/wcBCy3pENnCgBR9bN6JsY4OmhfUtIHe3ZW0mawA7+RDAcMLrMIZaf03NlQiX9DGyB8h4g==
-"@esbuild/freebsd-x64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/freebsd-x64/-/freebsd-x64-0.19.12.tgz#37a693553d42ff77cd7126764b535fb6cc28a11c"
-  integrity sha512-EYoXZ4d8xtBoVN7CEwWY2IN4ho76xjYXqSXMNccFSx2lgqOG/1TBPW0yPx1bJZk94qu3tX0fycJeeQsKovA8gg==
+"@esbuild/freebsd-x64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/freebsd-x64/-/freebsd-x64-0.21.5.tgz#aa615cfc80af954d3458906e38ca22c18cf5c261"
+  integrity sha512-J95kNBj1zkbMXtHVH29bBriQygMXqoVQOQYA+ISs0/2l3T9/kj42ow2mpqerRBxDJnmkUDCaQT/dfNXWX/ZZCQ==
-"@esbuild/linux-arm64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-arm64/-/linux-arm64-0.19.12.tgz#be9b145985ec6c57470e0e051d887b09dddb2d4b"
-  integrity sha512-EoTjyYyLuVPfdPLsGVVVC8a0p1BFFvtpQDB/YLEhaXyf/5bczaGeN15QkR+O4S5LeJ92Tqotve7i1jn35qwvdA==
+"@esbuild/linux-arm64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-arm64/-/linux-arm64-0.21.5.tgz#70ac6fa14f5cb7e1f7f887bcffb680ad09922b5b"
+  integrity sha512-ibKvmyYzKsBeX8d8I7MH/TMfWDXBF3db4qM6sy+7re0YXya+K1cem3on9XgdT2EQGMu4hQyZhan7TeQ8XkGp4Q==
-"@esbuild/linux-arm@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-arm/-/linux-arm-0.19.12.tgz#207ecd982a8db95f7b5279207d0ff2331acf5eef"
-  integrity sha512-J5jPms//KhSNv+LO1S1TX1UWp1ucM6N6XuL6ITdKWElCu8wXP72l9MM0zDTzzeikVyqFE6U8YAV9/tFyj0ti+w==
+"@esbuild/linux-arm@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-arm/-/linux-arm-0.21.5.tgz#fc6fd11a8aca56c1f6f3894f2bea0479f8f626b9"
+  integrity sha512-bPb5AHZtbeNGjCKVZ9UGqGwo8EUu4cLq68E95A53KlxAPRmUyYv2D6F0uUI65XisGOL1hBP5mTronbgo+0bFcA==
-"@esbuild/linux-ia32@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-ia32/-/linux-ia32-0.19.12.tgz#d0d86b5ca1562523dc284a6723293a52d5860601"
-  integrity sha512-Thsa42rrP1+UIGaWz47uydHSBOgTUnwBwNq59khgIwktK6x60Hivfbux9iNR0eHCHzOLjLMLfUMLCypBkZXMHA==
+"@esbuild/linux-ia32@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-ia32/-/linux-ia32-0.21.5.tgz#3271f53b3f93e3d093d518d1649d6d68d346ede2"
+  integrity sha512-YvjXDqLRqPDl2dvRODYmmhz4rPeVKYvppfGYKSNGdyZkA01046pLWyRKKI3ax8fbJoK5QbxblURkwK/MWY18Tg==
-"@esbuild/linux-loong64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-loong64/-/linux-loong64-0.19.12.tgz#9a37f87fec4b8408e682b528391fa22afd952299"
-  integrity sha512-LiXdXA0s3IqRRjm6rV6XaWATScKAXjI4R4LoDlvO7+yQqFdlr1Bax62sRwkVvRIrwXxvtYEHHI4dm50jAXkuAA==
+"@esbuild/linux-loong64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-loong64/-/linux-loong64-0.21.5.tgz#ed62e04238c57026aea831c5a130b73c0f9f26df"
+  integrity sha512-uHf1BmMG8qEvzdrzAqg2SIG/02+4/DHB6a9Kbya0XDvwDEKCoC8ZRWI5JJvNdUjtciBGFQ5PuBlpEOXQj+JQSg==
-"@esbuild/linux-mips64el@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-mips64el/-/linux-mips64el-0.19.12.tgz#4ddebd4e6eeba20b509d8e74c8e30d8ace0b89ec"
-  integrity sha512-fEnAuj5VGTanfJ07ff0gOA6IPsvrVHLVb6Lyd1g2/ed67oU1eFzL0r9WL7ZzscD+/N6i3dWumGE1Un4f7Amf+w==
+"@esbuild/linux-mips64el@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-mips64el/-/linux-mips64el-0.21.5.tgz#e79b8eb48bf3b106fadec1ac8240fb97b4e64cbe"
+  integrity sha512-IajOmO+KJK23bj52dFSNCMsz1QP1DqM6cwLUv3W1QwyxkyIWecfafnI555fvSGqEKwjMXVLokcV5ygHW5b3Jbg==
-"@esbuild/linux-ppc64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-ppc64/-/linux-ppc64-0.19.12.tgz#adb67dadb73656849f63cd522f5ecb351dd8dee8"
-  integrity sha512-nYJA2/QPimDQOh1rKWedNOe3Gfc8PabU7HT3iXWtNUbRzXS9+vgB0Fjaqr//XNbd82mCxHzik2qotuI89cfixg==
+"@esbuild/linux-ppc64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-ppc64/-/linux-ppc64-0.21.5.tgz#5f2203860a143b9919d383ef7573521fb154c3e4"
+  integrity sha512-1hHV/Z4OEfMwpLO8rp7CvlhBDnjsC3CttJXIhBi+5Aj5r+MBvy4egg7wCbe//hSsT+RvDAG7s81tAvpL2XAE4w==
-"@esbuild/linux-riscv64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-riscv64/-/linux-riscv64-0.19.12.tgz#11bc0698bf0a2abf8727f1c7ace2112612c15adf"
-  integrity sha512-2MueBrlPQCw5dVJJpQdUYgeqIzDQgw3QtiAHUC4RBz9FXPrskyyU3VI1hw7C0BSKB9OduwSJ79FTCqtGMWqJHg==
+"@esbuild/linux-riscv64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-riscv64/-/linux-riscv64-0.21.5.tgz#07bcafd99322d5af62f618cb9e6a9b7f4bb825dc"
+  integrity sha512-2HdXDMd9GMgTGrPWnJzP2ALSokE/0O5HhTUvWIbD3YdjME8JwvSCnNGBnTThKGEB91OZhzrJ4qIIxk/SBmyDDA==
-"@esbuild/linux-s390x@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-s390x/-/linux-s390x-0.19.12.tgz#e86fb8ffba7c5c92ba91fc3b27ed5a70196c3cc8"
-  integrity sha512-+Pil1Nv3Umes4m3AZKqA2anfhJiVmNCYkPchwFJNEJN5QxmTs1uzyy4TvmDrCRNT2ApwSari7ZIgrPeUx4UZDg==
+"@esbuild/linux-s390x@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-s390x/-/linux-s390x-0.21.5.tgz#b7ccf686751d6a3e44b8627ababc8be3ef62d8de"
+  integrity sha512-zus5sxzqBJD3eXxwvjN1yQkRepANgxE9lgOW2qLnmr8ikMTphkjgXu1HR01K4FJg8h1kEEDAqDcZQtbrRnB41A==
-"@esbuild/linux-x64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/linux-x64/-/linux-x64-0.19.12.tgz#5f37cfdc705aea687dfe5dfbec086a05acfe9c78"
-  integrity sha512-B71g1QpxfwBvNrfyJdVDexenDIt1CiDN1TIXLbhOw0KhJzE78KIFGX6OJ9MrtC0oOqMWf+0xop4qEU8JrJTwCg==
+"@esbuild/linux-x64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/linux-x64/-/linux-x64-0.21.5.tgz#6d8f0c768e070e64309af8004bb94e68ab2bb3b0"
+  integrity sha512-1rYdTpyv03iycF1+BhzrzQJCdOuAOtaqHTWJZCWvijKD2N5Xu0TtVC8/+1faWqcP9iBCWOmjmhoH94dH82BxPQ==
-"@esbuild/netbsd-x64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/netbsd-x64/-/netbsd-x64-0.19.12.tgz#29da566a75324e0d0dd7e47519ba2f7ef168657b"
-  integrity sha512-3ltjQ7n1owJgFbuC61Oj++XhtzmymoCihNFgT84UAmJnxJfm4sYCiSLTXZtE00VWYpPMYc+ZQmB6xbSdVh0JWA==
+"@esbuild/netbsd-x64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/netbsd-x64/-/netbsd-x64-0.21.5.tgz#bbe430f60d378ecb88decb219c602667387a6047"
+  integrity sha512-Woi2MXzXjMULccIwMnLciyZH4nCIMpWQAs049KEeMvOcNADVxo0UBIQPfSmxB3CWKedngg7sWZdLvLczpe0tLg==
-"@esbuild/openbsd-x64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/openbsd-x64/-/openbsd-x64-0.19.12.tgz#306c0acbdb5a99c95be98bdd1d47c916e7dc3ff0"
-  integrity sha512-RbrfTB9SWsr0kWmb9srfF+L933uMDdu9BIzdA7os2t0TXhCRjrQyCeOt6wVxr79CKD4c+p+YhCj31HBkYcXebw==
+"@esbuild/openbsd-x64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/openbsd-x64/-/openbsd-x64-0.21.5.tgz#99d1cf2937279560d2104821f5ccce220cb2af70"
+  integrity sha512-HLNNw99xsvx12lFBUwoT8EVCsSvRNDVxNpjZ7bPn947b8gJPzeHWyNVhFsaerc0n3TsbOINvRP2byTZ5LKezow==
-"@esbuild/sunos-x64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/sunos-x64/-/sunos-x64-0.19.12.tgz#0933eaab9af8b9b2c930236f62aae3fc593faf30"
-  integrity sha512-HKjJwRrW8uWtCQnQOz9qcU3mUZhTUQvi56Q8DPTLLB+DawoiQdjsYq+j+D3s9I8VFtDr+F9CjgXKKC4ss89IeA==
+"@esbuild/sunos-x64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/sunos-x64/-/sunos-x64-0.21.5.tgz#08741512c10d529566baba837b4fe052c8f3487b"
+  integrity sha512-6+gjmFpfy0BHU5Tpptkuh8+uw3mnrvgs+dSPQXQOv3ekbordwnzTVEb4qnIvQcYXq6gzkyTnoZ9dZG+D4garKg==
-"@esbuild/win32-arm64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/win32-arm64/-/win32-arm64-0.19.12.tgz#773bdbaa1971b36db2f6560088639ccd1e6773ae"
-  integrity sha512-URgtR1dJnmGvX864pn1B2YUYNzjmXkuJOIqG2HdU62MVS4EHpU2946OZoTMnRUHklGtJdJZ33QfzdjGACXhn1A==
+"@esbuild/win32-arm64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/win32-arm64/-/win32-arm64-0.21.5.tgz#675b7385398411240735016144ab2e99a60fc75d"
+  integrity sha512-Z0gOTd75VvXqyq7nsl93zwahcTROgqvuAcYDUr+vOv8uHhNSKROyU961kgtCD1e95IqPKSQKH7tBTslnS3tA8A==
-"@esbuild/win32-ia32@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/win32-ia32/-/win32-ia32-0.19.12.tgz#000516cad06354cc84a73f0943a4aa690ef6fd67"
-  integrity sha512-+ZOE6pUkMOJfmxmBZElNOx72NKpIa/HFOMGzu8fqzQJ5kgf6aTGrcJaFsNiVMH4JKpMipyK+7k0n2UXN7a8YKQ==
+"@esbuild/win32-ia32@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/win32-ia32/-/win32-ia32-0.21.5.tgz#1bfc3ce98aa6ca9a0969e4d2af72144c59c1193b"
+  integrity sha512-SWXFF1CL2RVNMaVs+BBClwtfZSvDgtL//G/smwAc5oVK/UPu2Gu9tIaRgFmYFFKrmg3SyAjSrElf0TiJ1v8fYA==
-"@esbuild/win32-x64@0.19.12":
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/@esbuild/win32-x64/-/win32-x64-0.19.12.tgz#c57c8afbb4054a3ab8317591a0b7320360b444ae"
-  integrity sha512-T1QyPSDCyMXaO3pzBkF96E8xMkiRYbUEZADd29SyPGabqxMViNoii+NcK7eWJAEoU6RZyEm5lVSIjTmcdoB9HA==
+"@esbuild/win32-x64@0.21.5":
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/@esbuild/win32-x64/-/win32-x64-0.21.5.tgz#acad351d582d157bb145535db2a6ff53dd514b5c"
+  integrity sha512-tQd/1efJuzPC6rCFwEvLtci/xNFcTZknmXs98FYDfGE4wP9ClFV98nyKrzJKVPMhdDnjzLhdUyMX4PsQAPjwIw==

"@eslint-community/eslint-utils@^4.2.0", "@eslint-community/eslint-utils@^4.4.0":
  version "4.4.0"
@@ -1626,70 +1626,115 @@
    estree-walker "^2.0.2"
    picomatch "^2.3.1"

-"@rollup/rollup-android-arm-eabi@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.12.0.tgz#38c3abd1955a3c21d492af6b1a1dca4bb1d894d6"
-  integrity sha512-+ac02NL/2TCKRrJu2wffk1kZ+RyqxVUlbjSagNgPm94frxtr+XDL12E5Ll1enWskLrtrZ2r8L3wED1orIibV/w==
+"@rollup/rollup-android-arm-eabi@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.52.5.tgz#0f44a2f8668ed87b040b6fe659358ac9239da4db"
+  integrity sha512-8c1vW4ocv3UOMp9K+gToY5zL2XiiVw3k7f1ksf4yO1FlDFQ1C2u72iACFnSOceJFsWskc2WZNqeRhFRPzv+wtQ==
-"@rollup/rollup-android-arm64@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.12.0.tgz#3822e929f415627609e53b11cec9a4be806de0e2"
-  integrity sha512-OBqcX2BMe6nvjQ0Nyp7cC90cnumt8PXmO7Dp3gfAju/6YwG0Tj74z1vKrfRz7qAv23nBcYM8BCbhrsWqO7PzQQ==
+"@rollup/rollup-android-arm64@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.52.5.tgz#25b9a01deef6518a948431564c987bcb205274f5"
+  integrity sha512-mQGfsIEFcu21mvqkEKKu2dYmtuSZOBMmAl5CFlPGLY94Vlcm+zWApK7F/eocsNzp8tKmbeBP8yXyAbx0XHsFNA==
-"@rollup/rollup-darwin-arm64@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.12.0.tgz#6c082de71f481f57df6cfa3701ab2a7afde96f69"
-  integrity sha512-X64tZd8dRE/QTrBIEs63kaOBG0b5GVEd3ccoLtyf6IdXtHdh8h+I56C2yC3PtC9Ucnv0CpNFJLqKFVgCYe0lOQ==
+"@rollup/rollup-darwin-arm64@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.52.5.tgz#8a102869c88f3780c7d5e6776afd3f19084ecd7f"
+  integrity sha512-takF3CR71mCAGA+v794QUZ0b6ZSrgJkArC+gUiG6LB6TQty9T0Mqh3m2ImRBOxS2IeYBo4lKWIieSvnEk2OQWA==
-"@rollup/rollup-darwin-x64@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.12.0.tgz#c34ca0d31f3c46a22c9afa0e944403eea0edcfd8"
-  integrity sha512-cc71KUZoVbUJmGP2cOuiZ9HSOP14AzBAThn3OU+9LcA1+IUqswJyR1cAJj3Mg55HbjZP6OLAIscbQsQLrpgTOg==
+"@rollup/rollup-darwin-x64@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.52.5.tgz#8e526417cd6f54daf1d0c04cf361160216581956"
+  integrity sha512-W901Pla8Ya95WpxDn//VF9K9u2JbocwV/v75TE0YIHNTbhqUTv9w4VuQ9MaWlNOkkEfFwkdNhXgcLqPSmHy0fA==
-"@rollup/rollup-linux-arm-gnueabihf@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.12.0.tgz#48e899c1e438629c072889b824a98787a7c2362d"
-  integrity sha512-a6w/Y3hyyO6GlpKL2xJ4IOh/7d+APaqLYdMf86xnczU3nurFTaVN9s9jOXQg97BE4nYm/7Ga51rjec5nfRdrvA==
+"@rollup/rollup-freebsd-arm64@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.52.5.tgz#0e7027054493f3409b1f219a3eac5efd128ef899"
+  integrity sha512-QofO7i7JycsYOWxe0GFqhLmF6l1TqBswJMvICnRUjqCx8b47MTo46W8AoeQwiokAx3zVryVnxtBMcGcnX12LvA==
-"@rollup/rollup-linux-arm64-gnu@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.12.0.tgz#788c2698a119dc229062d40da6ada8a090a73a68"
-  integrity sha512-0fZBq27b+D7Ar5CQMofVN8sggOVhEtzFUwOwPppQt0k+VR+7UHMZZY4y+64WJ06XOhBTKXtQB/Sv0NwQMXyNAA==
+"@rollup/rollup-freebsd-x64@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.52.5.tgz#72b204a920139e9ec3d331bd9cfd9a0c248ccb10"
+  integrity sha512-jr21b/99ew8ujZubPo9skbrItHEIE50WdV86cdSoRkKtmWa+DDr6fu2c/xyRT0F/WazZpam6kk7IHBerSL7LDQ==
-"@rollup/rollup-linux-arm64-musl@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.12.0.tgz#3882a4e3a564af9e55804beeb67076857b035ab7"
-  integrity sha512-eTvzUS3hhhlgeAv6bfigekzWZjaEX9xP9HhxB0Dvrdbkk5w/b+1Sxct2ZuDxNJKzsRStSq1EaEkVSEe7A7ipgQ==
+"@rollup/rollup-linux-arm-gnueabihf@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.52.5.tgz#ab1b522ebe5b7e06c99504cc38f6cd8b808ba41c"
+  integrity sha512-PsNAbcyv9CcecAUagQefwX8fQn9LQ4nZkpDboBOttmyffnInRy8R8dSg6hxxl2Re5QhHBf6FYIDhIj5v982ATQ==
-"@rollup/rollup-linux-riscv64-gnu@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.12.0.tgz#0c6ad792e1195c12bfae634425a3d2aa0fe93ab7"
-  integrity sha512-ix+qAB9qmrCRiaO71VFfY8rkiAZJL8zQRXveS27HS+pKdjwUfEhqo2+YF2oI+H/22Xsiski+qqwIBxVewLK7sw==
+"@rollup/rollup-linux-arm-musleabihf@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.52.5.tgz#f8cc30b638f1ee7e3d18eac24af47ea29d9beb00"
+  integrity sha512-Fw4tysRutyQc/wwkmcyoqFtJhh0u31K+Q6jYjeicsGJJ7bbEq8LwPWV/w0cnzOqR2m694/Af6hpFayLJZkG2VQ==
-"@rollup/rollup-linux-x64-gnu@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.12.0.tgz#9d62485ea0f18d8674033b57aa14fb758f6ec6e3"
-  integrity sha512-TenQhZVOtw/3qKOPa7d+QgkeM6xY0LtwzR8OplmyL5LrgTWIXpTQg2Q2ycBf8jm+SFW2Wt/DTn1gf7nFp3ssVA==
+"@rollup/rollup-linux-arm64-gnu@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.52.5.tgz#7af37a9e85f25db59dc8214172907b7e146c12cc"
+  integrity sha512-a+3wVnAYdQClOTlyapKmyI6BLPAFYs0JM8HRpgYZQO02rMR09ZcV9LbQB+NL6sljzG38869YqThrRnfPMCDtZg==
-"@rollup/rollup-linux-x64-musl@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.12.0.tgz#50e8167e28b33c977c1f813def2b2074d1435e05"
-  integrity sha512-LfFdRhNnW0zdMvdCb5FNuWlls2WbbSridJvxOvYWgSBOYZtgBfW9UGNJG//rwMqTX1xQE9BAodvMH9tAusKDUw==
+"@rollup/rollup-linux-arm64-musl@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.52.5.tgz#a623eb0d3617c03b7a73716eb85c6e37b776f7e0"
+  integrity sha512-AvttBOMwO9Pcuuf7m9PkC1PUIKsfaAJ4AYhy944qeTJgQOqJYJ9oVl2nYgY7Rk0mkbsuOpCAYSs6wLYB2Xiw0Q==
-"@rollup/rollup-win32-arm64-msvc@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.12.0.tgz#68d233272a2004429124494121a42c4aebdc5b8e"
-  integrity sha512-JPDxovheWNp6d7AHCgsUlkuCKvtu3RB55iNEkaQcf0ttsDU/JZF+iQnYcQJSk/7PtT4mjjVG8N1kpwnI9SLYaw==
+"@rollup/rollup-linux-loong64-gnu@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.52.5.tgz#76ea038b549c5c6c5f0d062942627c4066642ee2"
+  integrity sha512-DkDk8pmXQV2wVrF6oq5tONK6UHLz/XcEVow4JTTerdeV1uqPeHxwcg7aFsfnSm9L+OO8WJsWotKM2JJPMWrQtA==
-"@rollup/rollup-win32-ia32-msvc@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.12.0.tgz#366ca62221d1689e3b55a03f4ae12ae9ba595d40"
-  integrity sha512-fjtuvMWRGJn1oZacG8IPnzIV6GF2/XG+h71FKn76OYFqySXInJtseAqdprVTDTyqPxQOG9Exak5/E9Z3+EJ8ZA==
+"@rollup/rollup-linux-ppc64-gnu@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.52.5.tgz#d9a4c3f0a3492bc78f6fdfe8131ac61c7359ccd5"
+  integrity sha512-W/b9ZN/U9+hPQVvlGwjzi+Wy4xdoH2I8EjaCkMvzpI7wJUs8sWJ03Rq96jRnHkSrcHTpQe8h5Tg3ZzUPGauvAw==
-"@rollup/rollup-win32-x64-msvc@4.12.0":
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.12.0.tgz#9ffdf9ed133a7464f4ae187eb9e1294413fab235"
-  integrity sha512-ZYmr5mS2wd4Dew/JjT0Fqi2NPB/ZhZ2VvPp7SmvPZb4Y1CG/LRcS6tcRo2cYU7zLK5A7cdbhWnnWmUjoI4qapg==
+"@rollup/rollup-linux-riscv64-gnu@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.52.5.tgz#87ab033eebd1a9a1dd7b60509f6333ec1f82d994"
+  integrity sha512-sjQLr9BW7R/ZiXnQiWPkErNfLMkkWIoCz7YMn27HldKsADEKa5WYdobaa1hmN6slu9oWQbB6/jFpJ+P2IkVrmw==
+"@rollup/rollup-linux-riscv64-musl@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.52.5.tgz#bda3eb67e1c993c1ba12bc9c2f694e7703958d9f"
+  integrity sha512-hq3jU/kGyjXWTvAh2awn8oHroCbrPm8JqM7RUpKjalIRWWXE01CQOf/tUNWNHjmbMHg/hmNCwc/Pz3k1T/j/Lg==
+"@rollup/rollup-linux-s390x-gnu@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.52.5.tgz#f7bc10fbe096ab44694233dc42a2291ed5453d4b"
+  integrity sha512-gn8kHOrku8D4NGHMK1Y7NA7INQTRdVOntt1OCYypZPRt6skGbddska44K8iocdpxHTMMNui5oH4elPH4QOLrFQ==
+"@rollup/rollup-linux-x64-gnu@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.52.5.tgz#a151cb1234cc9b2cf5e8cfc02aa91436b8f9e278"
+  integrity sha512-hXGLYpdhiNElzN770+H2nlx+jRog8TyynpTVzdlc6bndktjKWyZyiCsuDAlpd+j+W+WNqfcyAWz9HxxIGfZm1Q==
+"@rollup/rollup-linux-x64-musl@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.52.5.tgz#7859e196501cc3b3062d45d2776cfb4d2f3a9350"
+  integrity sha512-arCGIcuNKjBoKAXD+y7XomR9gY6Mw7HnFBv5Rw7wQRvwYLR7gBAgV7Mb2QTyjXfTveBNFAtPt46/36vV9STLNg==
+"@rollup/rollup-openharmony-arm64@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.52.5.tgz#85d0df7233734df31e547c1e647d2a5300b3bf30"
+  integrity sha512-QoFqB6+/9Rly/RiPjaomPLmR/13cgkIGfA40LHly9zcH1S0bN2HVFYk3a1eAyHQyjs3ZJYlXvIGtcCs5tko9Cw==
+"@rollup/rollup-win32-arm64-msvc@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.52.5.tgz#e62357d00458db17277b88adbf690bb855cac937"
+  integrity sha512-w0cDWVR6MlTstla1cIfOGyl8+qb93FlAVutcor14Gf5Md5ap5ySfQ7R9S/NjNaMLSFdUnKGEasmVnu3lCMqB7w==
+"@rollup/rollup-win32-ia32-msvc@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.52.5.tgz#fc7cd40f44834a703c1f1c3fe8bcc27ce476cd50"
+  integrity sha512-Aufdpzp7DpOTULJCuvzqcItSGDH73pF3ko/f+ckJhxQyHtp67rHw3HMNxoIdDMUITJESNE6a8uh4Lo4SLouOUg==
+"@rollup/rollup-win32-x64-gnu@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-x64-gnu/-/rollup-win32-x64-gnu-4.52.5.tgz#1a22acfc93c64a64a48c42672e857ee51774d0d3"
+  integrity sha512-UGBUGPFp1vkj6p8wCRraqNhqwX/4kNQPS57BCFc8wYh0g94iVIW33wJtQAx3G7vrjjNtRaxiMUylM0ktp/TRSQ==
+"@rollup/rollup-win32-x64-msvc@4.52.5":
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.52.5.tgz#1657f56326bbe0ac80eedc9f9c18fc1ddd24e107"
+  integrity sha512-TAcgQh2sSkykPRWLrdyy2AiceMckNf5loITqXxFI5VuQjS5tSuw3WlwdN8qv8vzjLAUTvYaH/mVjSFpbkFbpTg==

"@rushstack/eslint-patch@^1.1.0":
  version "1.6.0"
@@ -1863,7 +1908,12 @@
  resolved "https://registry.yarnpkg.com/@swc/types/-/types-0.1.5.tgz#043b731d4f56a79b4897a3de1af35e75d56bc63a"
  integrity sha512-myfUej5naTBWnqOCc/MdVOLVjXUXtIA+NpDrDBKJtLLg2shUjBu3cZmB/85RyitKc55+lUUyl7oRfLOvkr2hsw==

-"@types/estree@1.0.5", "@types/estree@^1.0.0":
+"@types/estree@1.0.8":
+  version "1.0.8"
+  resolved "https://registry.yarnpkg.com/@types/estree/-/estree-1.0.8.tgz#958b91c991b1867ced318bedea0e215ee050726e"
+  integrity sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==
+
+"@types/estree@^1.0.0":
  version "1.0.5"
  resolved "https://registry.yarnpkg.com/@types/estree/-/estree-1.0.5.tgz#a6ce3e556e00fd9895dd872dd172ad0d4bd687f4"
  integrity sha512-/kYRxGDLWzHOB7q+wtSUQlFrtcdUccpfy+X+9iMBpHK8QLLhx2wIPYuS5DYtR9Wa/YlZAbIovy7qVdB1Aq6Lyw==
@@ -2074,44 +2074,44 @@
  dependencies:
    "@swc/core" "^1.3.107"

-"@vitest/expect@1.3.1":
-  version "1.3.1"
-  resolved "https://registry.yarnpkg.com/@vitest/expect/-/expect-1.3.1.tgz#d4c14b89c43a25fd400a6b941f51ba27fe0cb918"
-  integrity sha512-xofQFwIzfdmLLlHa6ag0dPV8YsnKOCP1KdAeVVh34vSjN2dcUiXYCD9htu/9eM7t8Xln4v03U9HLxLpPlsXdZw==
+"@vitest/expect@1.6.1":
+  version "1.6.1"
+  resolved "https://registry.yarnpkg.com/@vitest/expect/-/expect-1.6.1.tgz#b90c213f587514a99ac0bf84f88cff9042b0f14d"
+  integrity sha512-jXL+9+ZNIJKruofqXuuTClf44eSpcHlgj3CiuNihUF3Ioujtmc0zIa3UJOW5RjDK1YLBJZnWBlPuqhYycLioog==
  dependencies:
-    "@vitest/spy" "1.3.1"
-    "@vitest/utils" "1.3.1"
+    "@vitest/spy" "1.6.1"
+    "@vitest/utils" "1.6.1"
    chai "^4.3.10"

-"@vitest/runner@1.3.1":
-  version "1.3.1"
-  resolved "https://registry.yarnpkg.com/@vitest/runner/-/runner-1.3.1.tgz#e7f96cdf74842934782bfd310eef4b8695bbfa30"
-  integrity sha512-5FzF9c3jG/z5bgCnjr8j9LNq/9OxV2uEBAITOXfoe3rdZJTdO7jzThth7FXv/6b+kdY65tpRQB7WaKhNZwX+Kg==
+"@vitest/runner@1.6.1":
+  version "1.6.1"
+  resolved "https://registry.yarnpkg.com/@vitest/runner/-/runner-1.6.1.tgz#10f5857c3e376218d58c2bfacfea1161e27e117f"
+  integrity sha512-3nSnYXkVkf3mXFfE7vVyPmi3Sazhb/2cfZGGs0JRzFsPFvAMBEcrweV1V1GsrstdXeKCTXlJbvnQwGWgEIHmOA==
  dependencies:
-    "@vitest/utils" "1.3.1"
+    "@vitest/utils" "1.6.1"
    p-limit "^5.0.0"
    pathe "^1.1.1"

-"@vitest/snapshot@1.3.1":
-  version "1.3.1"
-  resolved "https://registry.yarnpkg.com/@vitest/snapshot/-/snapshot-1.3.1.tgz#193a5d7febf6ec5d22b3f8c5a093f9e4322e7a88"
-  integrity sha512-EF++BZbt6RZmOlE3SuTPu/NfwBF6q4ABS37HHXzs2LUVPBLx2QoY/K0fKpRChSo8eLiuxcbCVfqKgx/dplCDuQ==
+"@vitest/snapshot@1.6.1":
+  version "1.6.1"
+  resolved "https://registry.yarnpkg.com/@vitest/snapshot/-/snapshot-1.6.1.tgz#90414451a634bb36cd539ccb29ae0d048a8c0479"
+  integrity sha512-WvidQuWAzU2p95u8GAKlRMqMyN1yOJkGHnx3M1PL9Raf7AQ1kwLKg04ADlCa3+OXUZE7BceOhVZiuWAbzCKcUQ==
  dependencies:
    magic-string "^0.30.5"
    pathe "^1.1.1"
    pretty-format "^29.7.0"

-"@vitest/spy@1.3.1":
-  version "1.3.1"
-  resolved "https://registry.yarnpkg.com/@vitest/spy/-/spy-1.3.1.tgz#814245d46d011b99edd1c7528f5725c64e85a88b"
-  integrity sha512-xAcW+S099ylC9VLU7eZfdT9myV67Nor9w9zhf0mGCYJSO+zM2839tOeROTdikOi/8Qeusffvxb/MyBSOja1Uig==
+"@vitest/spy@1.6.1":
+  version "1.6.1"
+  resolved "https://registry.yarnpkg.com/@vitest/spy/-/spy-1.6.1.tgz#33376be38a5ed1ecd829eb986edaecc3e798c95d"
+  integrity sha512-MGcMmpGkZebsMZhbQKkAf9CX5zGvjkBTqf8Zx3ApYWXr3wG+QvEu2eXWfnIIWYSJExIp4V9FCKDEeygzkYrXMw==
  dependencies:
    tinyspy "^2.2.0"

-"@vitest/utils@1.3.1":
-  version "1.3.1"
-  resolved "https://registry.yarnpkg.com/@vitest/utils/-/utils-1.3.1.tgz#7b05838654557544f694a372de767fcc9594d61a"
-  integrity sha512-d3Waie/299qqRyHTm2DjADeTaNdNSVsnwHPWrs20JMpjh6eiVq7ggggweO8rc4arhf6rRkWuHKwvxGvejUXZZQ==
+"@vitest/utils@1.6.1":
+  version "1.6.1"
+  resolved "https://registry.yarnpkg.com/@vitest/utils/-/utils-1.6.1.tgz#6d2f36cb6d866f2bbf59da854a324d6bf8040f17"
+  integrity sha512-jOrrUvXM4Av9ZWiG1EajNto0u96kWAhJ1LmPmJhXXQx/32MecEKd10pOLYgS2BQx1TgkGhloPU1ArDW2vvaY6g==
  dependencies:
    diff-sequences "^29.6.3"
    estree-walker "^3.0.3"
@@ -2427,11 +2477,11 @@ brace-expansion@^2.0.1:
    balanced-match "^1.0.0"

braces@^3.0.2, braces@~3.0.2:
-  version "3.0.2"
-  resolved "https://registry.yarnpkg.com/braces/-/braces-3.0.2.tgz#3454e1a462ee8d599e236df336cd9ea4f8afe107"
-  integrity sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==
+  version "3.0.3"
+  resolved "https://registry.yarnpkg.com/braces/-/braces-3.0.3.tgz#490332f40919452272d55a8480adc0c441358789"
+  integrity sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==
  dependencies:
-    fill-range "^7.0.1"
+    fill-range "^7.1.1"

browserslist@^4.21.10, browserslist@^4.21.9, browserslist@^4.22.1:
  version "4.22.1"
@@ -2627,9 +2677,9 @@ cosmiconfig@^8.1.3:
    path-type "^4.0.0"

cross-spawn@^7.0.2, cross-spawn@^7.0.3:
-  version "7.0.3"
-  resolved "https://registry.yarnpkg.com/cross-spawn/-/cross-spawn-7.0.3.tgz#f73a85b9d5d41d045551c177e2882d4ac85728a6"
-  integrity sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==
+  version "7.0.6"
+  resolved "https://registry.yarnpkg.com/cross-spawn/-/cross-spawn-7.0.6.tgz#8a58fe78f00dcd70c370451759dfbfaf03e8ee9f"
+  integrity sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==
  dependencies:
    path-key "^3.1.0"
    shebang-command "^2.0.0"
@@ -2921,34 +2971,34 @@ es-to-primitive@^1.2.1:
    is-date-object "^1.0.1"
    is-symbol "^1.0.2"

-esbuild@^0.19.3:
-  version "0.19.12"
-  resolved "https://registry.yarnpkg.com/esbuild/-/esbuild-0.19.12.tgz#dc82ee5dc79e82f5a5c3b4323a2a641827db3e04"
-  integrity sha512-aARqgq8roFBj054KvQr5f1sFu0D65G+miZRCuJyJ0G13Zwx7vRar5Zhn2tkQNzIXcBrNVsv/8stehpj+GAjgbg==
+esbuild@^0.21.3:
+  version "0.21.5"
+  resolved "https://registry.yarnpkg.com/esbuild/-/esbuild-0.21.5.tgz#9ca301b120922959b766360d8ac830da0d02997d"
+  integrity sha512-mg3OPMV4hXywwpoDxu3Qda5xCKQi+vCTZq8S9J/EpkhB2HzKXq4SNFZE3+NK93JYxc8VMSep+lOUSC/RVKaBqw==
  optionalDependencies:
-    "@esbuild/aix-ppc64" "0.19.12"
-    "@esbuild/android-arm" "0.19.12"
-    "@esbuild/android-arm64" "0.19.12"
-    "@esbuild/android-x64" "0.19.12"
-    "@esbuild/darwin-arm64" "0.19.12"
-    "@esbuild/darwin-x64" "0.19.12"
-    "@esbuild/freebsd-arm64" "0.19.12"
-    "@esbuild/freebsd-x64" "0.19.12"
-    "@esbuild/linux-arm" "0.19.12"
-    "@esbuild/linux-arm64" "0.19.12"
-    "@esbuild/linux-ia32" "0.19.12"
-    "@esbuild/linux-loong64" "0.19.12"
-    "@esbuild/linux-mips64el" "0.19.12"
-    "@esbuild/linux-ppc64" "0.19.12"
-    "@esbuild/linux-riscv64" "0.19.12"
-    "@esbuild/linux-s390x" "0.19.12"
-    "@esbuild/linux-x64" "0.19.12"
-    "@esbuild/netbsd-x64" "0.19.12"
-    "@esbuild/openbsd-x64" "0.19.12"
-    "@esbuild/sunos-x64" "0.19.12"
-    "@esbuild/win32-arm64" "0.19.12"
-    "@esbuild/win32-ia32" "0.19.12"
-    "@esbuild/win32-x64" "0.19.12"
+    "@esbuild/aix-ppc64" "0.21.5"
+    "@esbuild/android-arm" "0.21.5"
+    "@esbuild/android-arm64" "0.21.5"
+    "@esbuild/android-x64" "0.21.5"
+    "@esbuild/darwin-arm64" "0.21.5"
+    "@esbuild/darwin-x64" "0.21.5"
+    "@esbuild/freebsd-arm64" "0.21.5"
+    "@esbuild/freebsd-x64" "0.21.5"
+    "@esbuild/linux-arm" "0.21.5"
+    "@esbuild/linux-arm64" "0.21.5"
+    "@esbuild/linux-ia32" "0.21.5"
+    "@esbuild/linux-loong64" "0.21.5"
+    "@esbuild/linux-mips64el" "0.21.5"
+    "@esbuild/linux-ppc64" "0.21.5"
+    "@esbuild/linux-riscv64" "0.21.5"
+    "@esbuild/linux-s390x" "0.21.5"
+    "@esbuild/linux-x64" "0.21.5"
+    "@esbuild/netbsd-x64" "0.21.5"
+    "@esbuild/openbsd-x64" "0.21.5"
+    "@esbuild/sunos-x64" "0.21.5"
+    "@esbuild/win32-arm64" "0.21.5"
+    "@esbuild/win32-ia32" "0.21.5"
+    "@esbuild/win32-x64" "0.21.5"

escalade@^3.1.1:
  version "3.1.1"
@@ -3275,10 +3325,10 @@ file-entry-cache@^6.0.1:
  dependencies:
    flat-cache "^3.0.4"

-fill-range@^7.0.1:
-  version "7.0.1"
-  resolved "https://registry.yarnpkg.com/fill-range/-/fill-range-7.0.1.tgz#1919a6a7c75fe38b2c7c77e5198535da9acdda40"
-  integrity sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==
+fill-range@^7.1.1:
+  version "7.1.1"
+  resolved "https://registry.yarnpkg.com/fill-range/-/fill-range-7.1.1.tgz#44265d3cac07e3ea7dc247516380643754a05292"
+  integrity sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==
  dependencies:
    to-regex-range "^5.0.1"
@@ -3885,9 +3935,9 @@ js-tokens@^8.0.2:
  integrity sha512-UfJMcSJc+SEXEl9lH/VLHSZbThQyLpw1vLO1Lb+j4RWDvG3N2f7yj3PVQA3cmkTBNldJ9eFnM+xEXxHIXrYiJw==

js-yaml@^4.1.0:
-  version "4.1.0"
-  resolved "https://registry.yarnpkg.com/js-yaml/-/js-yaml-4.1.0.tgz#c1fb65f8f5017901cdd2c951864ba18458a10602"
-  integrity sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==
+  version "4.1.1"
+  resolved "https://registry.yarnpkg.com/js-yaml/-/js-yaml-4.1.1.tgz#854c292467705b699476e1a2decc0c8a3458806b"
+  integrity sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==
  dependencies:
    argparse "^2.0.1"
@@ -4172,10 +4222,10 @@ mz@^2.7.0:
    object-assign "^4.0.1"
    thenify-all "^1.0.0"

-nanoid@^3.3.7:
-  version "3.3.7"
-  resolved "https://registry.yarnpkg.com/nanoid/-/nanoid-3.3.7.tgz#d0c301a691bc8d54efa0a2226ccf3fe2fd656bd8"
-  integrity sha512-eSRppjcPIatRIMC1U6UngP8XFcz8MQWGQdt1MTBQ7NaAmvXDfvNxbvWV3x2y6CdEUciCSsDHDQZbhYaB8QEo2g==
+nanoid@^3.3.11:
+  version "3.3.11"
+  resolved "https://registry.yarnpkg.com/nanoid/-/nanoid-3.3.11.tgz#4f4f112cefbe303202f2199838128936266d185b"
+  integrity sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==

natural-compare@^1.4.0:
  version "1.4.0"
@@ -4403,10 +4453,10 @@ pathval@^1.1.1:
  resolved "https://registry.yarnpkg.com/pathval/-/pathval-1.1.1.tgz#8534e77a77ce7ac5a2512ea21e0fdb8fcf6c3d8d"
  integrity sha512-Dp6zGqpTdETdR63lehJYPeIOqpiNBNtc7BpWSLrOje7UaIsE5aY92r/AunQA7rsXvet3lrJ3JnZX29UPTKXyKQ==

-picocolors@^1.0.0:
-  version "1.0.0"
-  resolved "https://registry.yarnpkg.com/picocolors/-/picocolors-1.0.0.tgz#cb5bdc74ff3f51892236eaf79d68bc44564ab81c"
-  integrity sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ==
+picocolors@^1.0.0, picocolors@^1.1.1:
+  version "1.1.1"
+  resolved "https://registry.yarnpkg.com/picocolors/-/picocolors-1.1.1.tgz#3d321af3eab939b083c8f929a1d12cda81c26b6b"
+  integrity sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==

picomatch@^2.0.4, picomatch@^2.2.1, picomatch@^2.3.1:
  version "2.3.1"
@@ -4476,14 +4526,14 @@ postcss-value-parser@^4.0.0, postcss-value-parser@^4.2.0:
  resolved "https://registry.yarnpkg.com/postcss-value-parser/-/postcss-value-parser-4.2.0.tgz#723c09920836ba6d3e5af019f92bc0971c02e514"
  integrity sha512-1NNCs6uurfkVbeXG4S8JFT9t19m45ICnif8zWLd5oPSZ50QnwMfK+H3jv408d4jw/7Bttv5axS5IiHoLaVNHeQ==

-postcss@^8.4.23, postcss@^8.4.31, postcss@^8.4.35:
-  version "8.4.35"
-  resolved "https://registry.yarnpkg.com/postcss/-/postcss-8.4.35.tgz#60997775689ce09011edf083a549cea44aabe2f7"
-  integrity sha512-u5U8qYpBCpN13BsiEB0CbR1Hhh4Gc0zLFuedrHJKMctHCHAGrMdG0PRM/KErzAL3CU6/eckEtmHNB3x6e3c0vA==
+postcss@^8.4.23, postcss@^8.4.31, postcss@^8.4.43:
+  version "8.5.6"
+  resolved "https://registry.yarnpkg.com/postcss/-/postcss-8.5.6.tgz#2825006615a619b4f62a9e7426cc120b349a8f3c"
+  integrity sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==
  dependencies:
-    nanoid "^3.3.7"
-    picocolors "^1.0.0"
-    source-map-js "^1.0.2"
+    nanoid "^3.3.11"
+    picocolors "^1.1.1"
+    source-map-js "^1.2.1"

prelude-ls@^1.2.1:
  version "1.2.1"
@@ -4715,26 +4765,35 @@ rimraf@^3.0.2:
  dependencies:
    glob "^7.1.3"

-rollup@^4.2.0:
-  version "4.12.0"
-  resolved "https://registry.yarnpkg.com/rollup/-/rollup-4.12.0.tgz#0b6d1e5f3d46bbcf244deec41a7421dc54cc45b5"
-  integrity sha512-wz66wn4t1OHIJw3+XU7mJJQV/2NAfw5OAk6G6Hoo3zcvz/XOfQ52Vgi+AN4Uxoxi0KBBwk2g8zPrTDA4btSB/Q==
+rollup@^4.20.0:
+  version "4.52.5"
+  resolved "https://registry.yarnpkg.com/rollup/-/rollup-4.52.5.tgz#96982cdcaedcdd51b12359981f240f94304ec235"
+  integrity sha512-3GuObel8h7Kqdjt0gxkEzaifHTqLVW56Y/bjN7PSQtkKr0w3V/QYSdt6QWYtd7A1xUtYQigtdUfgj1RvWVtorw==
  dependencies:
-    "@types/estree" "1.0.5"
+    "@types/estree" "1.0.8"
  optionalDependencies:
-    "@rollup/rollup-android-arm-eabi" "4.12.0"
-    "@rollup/rollup-android-arm64" "4.12.0"
-    "@rollup/rollup-darwin-arm64" "4.12.0"
-    "@rollup/rollup-darwin-x64" "4.12.0"
-    "@rollup/rollup-linux-arm-gnueabihf" "4.12.0"
-    "@rollup/rollup-linux-arm64-gnu" "4.12.0"
-    "@rollup/rollup-linux-arm64-musl" "4.12.0"
-    "@rollup/rollup-linux-riscv64-gnu" "4.12.0"
-    "@rollup/rollup-linux-x64-gnu" "4.12.0"
-    "@rollup/rollup-linux-x64-musl" "4.12.0"
-    "@rollup/rollup-win32-arm64-msvc" "4.12.0"
-    "@rollup/rollup-win32-ia32-msvc" "4.12.0"
-    "@rollup/rollup-win32-x64-msvc" "4.12.0"
+    "@rollup/rollup-android-arm-eabi" "4.52.5"
+    "@rollup/rollup-android-arm64" "4.52.5"
+    "@rollup/rollup-darwin-arm64" "4.52.5"
+    "@rollup/rollup-darwin-x64" "4.52.5"
+    "@rollup/rollup-freebsd-arm64" "4.52.5"
+    "@rollup/rollup-freebsd-x64" "4.52.5"
+    "@rollup/rollup-linux-arm-gnueabihf" "4.52.5"
+    "@rollup/rollup-linux-arm-musleabihf" "4.52.5"
+    "@rollup/rollup-linux-arm64-gnu" "4.52.5"
+    "@rollup/rollup-linux-arm64-musl" "4.52.5"
+    "@rollup/rollup-linux-loong64-gnu" "4.52.5"
+    "@rollup/rollup-linux-ppc64-gnu" "4.52.5"
+    "@rollup/rollup-linux-riscv64-gnu" "4.52.5"
+    "@rollup/rollup-linux-riscv64-musl" "4.52.5"
+    "@rollup/rollup-linux-s390x-gnu" "4.52.5"
+    "@rollup/rollup-linux-x64-gnu" "4.52.5"
+    "@rollup/rollup-linux-x64-musl" "4.52.5"
+    "@rollup/rollup-openharmony-arm64" "4.52.5"
+    "@rollup/rollup-win32-arm64-msvc" "4.52.5"
+    "@rollup/rollup-win32-ia32-msvc" "4.52.5"
+    "@rollup/rollup-win32-x64-gnu" "4.52.5"
+    "@rollup/rollup-win32-x64-msvc" "4.52.5"
    fsevents "~2.3.2"

rrweb-cssom@^0.6.0:
@@ -4862,10 +4921,10 @@ snake-case@^3.0.4:
    dot-case "^3.0.4"
    tslib "^2.0.3"

-source-map-js@^1.0.2:
-  version "1.0.2"
-  resolved "https://registry.yarnpkg.com/source-map-js/-/source-map-js-1.0.2.tgz#adbc361d9c62df380125e7f161f71c826f1e490c"
-  integrity sha512-R0XvVJ9WusLiqTCEiGCmICCMplcCkIwwR11mOSD9CR5u+IXYdiseeEuXCVAjS54zqwkLcPNnmU4OeJ6tUrWhDw==
+source-map-js@^1.2.1:
+  version "1.2.1"
+  resolved "https://registry.yarnpkg.com/source-map-js/-/source-map-js-1.2.1.tgz#1ce5650fddd87abc099eda37dcff024c2667ae46"
+  integrity sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==

stackback@0.0.2:
  version "0.0.2"
@@ -5055,10 +5114,10 @@ tinybench@^2.5.1:
  resolved "https://registry.yarnpkg.com/tinybench/-/tinybench-2.6.0.tgz#1423284ee22de07c91b3752c048d2764714b341b"
  integrity sha512-N8hW3PG/3aOoZAN5V/NSAEDz0ZixDSSt5b/a05iqtpgfLWMSVuCo7w0k2vVvEjdrIoeGqZzweX2WlyioNIHchA==

-tinypool@^0.8.2:
-  version "0.8.2"
-  resolved "https://registry.yarnpkg.com/tinypool/-/tinypool-0.8.2.tgz#84013b03dc69dacb322563a475d4c0a9be00f82a"
-  integrity sha512-SUszKYe5wgsxnNOVlBYO6IC+8VGWdVGZWAqUxp3UErNBtptZvWbwyUOyzNL59zigz2rCA92QiL3wvG+JDSdJdQ==
+tinypool@^0.8.3:
+  version "0.8.4"
+  resolved "https://registry.yarnpkg.com/tinypool/-/tinypool-0.8.4.tgz#e217fe1270d941b39e98c625dcecebb1408c9aa8"
+  integrity sha512-i11VH5gS6IFeLY3gMBQ00/MmLncVP7JLXOw1vlgkytLmJK7QnEr7NXf0LBdxfmNPAeyetukOk0bOYrJrFGjYJQ==

tinyspy@^2.2.0:
  version "2.2.1"
@@ -5297,10 +5356,10 @@ util-deprecate@^1.0.2:
  resolved "https://registry.yarnpkg.com/util-deprecate/-/util-deprecate-1.0.2.tgz#450d4dc9fa70de732762fbd2d4a28981419a0ccf"
  integrity sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==

-vite-node@1.3.1:
-  version "1.3.1"
-  resolved "https://registry.yarnpkg.com/vite-node/-/vite-node-1.3.1.tgz#a93f7372212f5d5df38e945046b945ac3f4855d2"
+vite-node@1.6.1:
+  version "1.6.1"
+  resolved "https://registry.yarnpkg.com/vite-node/-/vite-node-1.6.1.tgz#fff3ef309296ea03ceaa6ca4bb660922f5416c57"
integrity sha512-azbRrqRxlWTJEVbzInZCTchx0X69M/XPTCz4H+TLvlTcR/xH/3hkRqhOakT41fMJCMzXTu4UvegkZiEoJAWvng== integrity sha512-YAXkfvGtuTzwWbDSACdJSg4A4DZiAqckWe90Zapc/sEX3XvHcw1NdurM/6od8J207tSDqNbSsgdCacBgvJKFuA==
dependencies: dependencies:
cac "^6.7.14" cac "^6.7.14"
debug "^4.3.4" debug "^4.3.4"
@ -5327,27 +5386,27 @@ vite-tsconfig-paths@^3.5.0:
recrawl-sync "^2.0.3" recrawl-sync "^2.0.3"
tsconfig-paths "^4.0.0" tsconfig-paths "^4.0.0"
vite@^5.0.0, vite@^5.1.7: vite@^5.0.0, vite@^5.4.21:
version "5.1.7" version "5.4.21"
resolved "https://registry.yarnpkg.com/vite/-/vite-5.1.7.tgz#9f685a2c4c70707fef6d37341b0e809c366da619" resolved "https://registry.yarnpkg.com/vite/-/vite-5.4.21.tgz#84a4f7c5d860b071676d39ba513c0d598fdc7027"
integrity sha512-sgnEEFTZYMui/sTlH1/XEnVNHMujOahPLGMxn1+5sIT45Xjng1Ec1K78jRP15dSmVgg5WBin9yO81j3o9OxofA== integrity sha512-o5a9xKjbtuhY6Bi5S3+HvbRERmouabWbyUcpXXUA1u+GNUKoROi9byOJ8M0nHbHYHkYICiMlqxkg1KkYmm25Sw==
dependencies: dependencies:
esbuild "^0.19.3" esbuild "^0.21.3"
postcss "^8.4.35" postcss "^8.4.43"
rollup "^4.2.0" rollup "^4.20.0"
optionalDependencies: optionalDependencies:
fsevents "~2.3.3" fsevents "~2.3.3"
vitest@^1.3.1: vitest@^1.6.1:
version "1.3.1" version "1.6.1"
resolved "https://registry.yarnpkg.com/vitest/-/vitest-1.3.1.tgz#2d7e9861f030d88a4669392a4aecb40569d90937" resolved "https://registry.yarnpkg.com/vitest/-/vitest-1.6.1.tgz#b4a3097adf8f79ac18bc2e2e0024c534a7a78d2f"
integrity sha512-/1QJqXs8YbCrfv/GPQ05wAZf2eakUPLPa18vkJAKE7RXOKfVHqMZZ1WlTjiwl6Gcn65M5vpNUB6EFLnEdRdEXQ== integrity sha512-Ljb1cnSJSivGN0LqXd/zmDbWEM0RNNg2t1QW/XUhYl/qPqyu7CsqeWtqQXHVaJsecLPuDoak2oJcZN2QoRIOag==
dependencies: dependencies:
"@vitest/expect" "1.3.1" "@vitest/expect" "1.6.1"
"@vitest/runner" "1.3.1" "@vitest/runner" "1.6.1"
"@vitest/snapshot" "1.3.1" "@vitest/snapshot" "1.6.1"
"@vitest/spy" "1.3.1" "@vitest/spy" "1.6.1"
"@vitest/utils" "1.3.1" "@vitest/utils" "1.6.1"
acorn-walk "^8.3.2" acorn-walk "^8.3.2"
chai "^4.3.10" chai "^4.3.10"
debug "^4.3.4" debug "^4.3.4"
@ -5359,9 +5418,9 @@ vitest@^1.3.1:
std-env "^3.5.0" std-env "^3.5.0"
strip-literal "^2.0.0" strip-literal "^2.0.0"
tinybench "^2.5.1" tinybench "^2.5.1"
tinypool "^0.8.2" tinypool "^0.8.3"
vite "^5.0.0" vite "^5.0.0"
vite-node "1.3.1" vite-node "1.6.1"
why-is-node-running "^2.2.2" why-is-node-running "^2.2.2"
w3c-xmlserializer@^5.0.0: w3c-xmlserializer@^5.0.0:

@@ -418,13 +418,13 @@ func parseSynoinfo(path string) (string, error) {
	// Extract the CPU in the middle (88f6282 in the above example).
	s := bufio.NewScanner(f)
	for s.Scan() {
-		l := s.Text()
-		if !strings.HasPrefix(l, "unique=") {
+		line := s.Text()
+		if !strings.HasPrefix(line, "unique=") {
			continue
		}
-		parts := strings.SplitN(l, "_", 3)
+		parts := strings.SplitN(line, "_", 3)
		if len(parts) != 3 {
-			return "", fmt.Errorf(`malformed %q: found %q, expected format like 'unique="synology_$cpu_$model'`, path, l)
+			return "", fmt.Errorf(`malformed %q: found %q, expected format like 'unique="synology_$cpu_$model'`, path, line)
		}
		return parts[1], nil
	}

@ -0,0 +1,311 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// cigocacher is an opinionated-to-Tailscale client for gocached. It connects
// at a URL like "https://ci-gocached-azure-1.corp.ts.net:31364", but that is
// stored in a GitHub Actions variable so that its hostname can be updated for
// all branches at the same time in sync with the actual infrastructure.
//
// It authenticates using GitHub OIDC tokens, and all HTTP errors are ignored
// so that its failure mode is just that builds get slower and fall back to
// disk-only cache.
package main
import (
"bytes"
"context"
jsonv1 "encoding/json"
"errors"
"flag"
"fmt"
"io"
"log"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"sync/atomic"
"time"
"github.com/bradfitz/go-tool-cache/cacheproc"
"github.com/bradfitz/go-tool-cache/cachers"
)
func main() {
var (
auth = flag.Bool("auth", false, "auth with cigocached and exit, printing the access token as output")
token = flag.String("token", "", "the cigocached access token to use, as created using --auth")
cigocachedURL = flag.String("cigocached-url", "", "optional cigocached URL (scheme, host, and port). empty means to not use one.")
dir = flag.String("cache-dir", "", "cache directory; empty means automatic")
verbose = flag.Bool("verbose", false, "enable verbose logging")
)
flag.Parse()
if *auth {
if *cigocachedURL == "" {
log.Print("--cigocached-url is empty, skipping auth")
return
}
tk, err := fetchAccessToken(httpClient(), os.Getenv("ACTIONS_ID_TOKEN_REQUEST_URL"), os.Getenv("ACTIONS_ID_TOKEN_REQUEST_TOKEN"), *cigocachedURL)
if err != nil {
log.Printf("error fetching access token, skipping auth: %v", err)
return
}
fmt.Println(tk)
return
}
if *dir == "" {
d, err := os.UserCacheDir()
if err != nil {
log.Fatal(err)
}
*dir = filepath.Join(d, "go-cacher")
log.Printf("Defaulting to cache dir %v ...", *dir)
}
if err := os.MkdirAll(*dir, 0750); err != nil {
log.Fatal(err)
}
c := &cigocacher{
disk: &cachers.DiskCache{
Dir: *dir,
Verbose: *verbose,
},
verbose: *verbose,
}
if *cigocachedURL != "" {
if *verbose {
log.Printf("Using cigocached at %s", *cigocachedURL)
}
c.gocached = &gocachedClient{
baseURL: *cigocachedURL,
cl: httpClient(),
accessToken: *token,
verbose: *verbose,
}
}
var p *cacheproc.Process
p = &cacheproc.Process{
Close: func() error {
if c.verbose {
log.Printf("gocacheprog: closing; %d gets (%d hits, %d misses, %d errors); %d puts (%d errors)",
p.Gets.Load(), p.GetHits.Load(), p.GetMisses.Load(), p.GetErrors.Load(), p.Puts.Load(), p.PutErrors.Load())
}
return c.close()
},
Get: c.get,
Put: c.put,
}
if err := p.Run(); err != nil {
log.Fatal(err)
}
}
func httpClient() *http.Client {
return &http.Client{
Transport: &http.Transport{
DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
host, port, err := net.SplitHostPort(addr)
if err == nil {
// This does not run in a tailnet. We serve corp.ts.net
// TLS certs, and override DNS resolution to look up the
// private IP for the VM by its hostname.
if vm, ok := strings.CutSuffix(host, ".corp.ts.net"); ok {
addr = net.JoinHostPort(vm, port)
}
}
var d net.Dialer
return d.DialContext(ctx, network, addr)
},
},
}
}
type cigocacher struct {
disk *cachers.DiskCache
gocached *gocachedClient
verbose bool
getNanos atomic.Int64 // total nanoseconds spent in gets
putNanos atomic.Int64 // total nanoseconds spent in puts
getHTTP atomic.Int64 // HTTP get requests made
getHTTPBytes atomic.Int64 // HTTP get bytes transferred
getHTTPHits atomic.Int64 // HTTP get hits
getHTTPMisses atomic.Int64 // HTTP get misses
getHTTPErrors atomic.Int64 // HTTP get errors ignored on best-effort basis
getHTTPNanos atomic.Int64 // total nanoseconds spent in HTTP gets
putHTTP atomic.Int64 // HTTP put requests made
putHTTPBytes atomic.Int64 // HTTP put bytes transferred
putHTTPErrors atomic.Int64 // HTTP put errors ignored on best-effort basis
putHTTPNanos atomic.Int64 // total nanoseconds spent in HTTP puts
}
func (c *cigocacher) get(ctx context.Context, actionID string) (outputID, diskPath string, err error) {
t0 := time.Now()
defer func() {
c.getNanos.Add(time.Since(t0).Nanoseconds())
}()
if c.gocached == nil {
return c.disk.Get(ctx, actionID)
}
outputID, diskPath, err = c.disk.Get(ctx, actionID)
if err == nil && outputID != "" {
return outputID, diskPath, nil
}
c.getHTTP.Add(1)
t0HTTP := time.Now()
defer func() {
c.getHTTPNanos.Add(time.Since(t0HTTP).Nanoseconds())
}()
outputID, res, err := c.gocached.get(ctx, actionID)
if err != nil {
c.getHTTPErrors.Add(1)
return "", "", nil
}
if outputID == "" || res == nil {
c.getHTTPMisses.Add(1)
return "", "", nil
}
defer res.Body.Close()
diskPath, err = put(c.disk, actionID, outputID, res.ContentLength, res.Body)
if err != nil {
return "", "", fmt.Errorf("error filling disk cache from HTTP: %w", err)
}
c.getHTTPHits.Add(1)
c.getHTTPBytes.Add(res.ContentLength)
return outputID, diskPath, nil
}
func (c *cigocacher) put(ctx context.Context, actionID, outputID string, size int64, r io.Reader) (diskPath string, err error) {
t0 := time.Now()
defer func() {
c.putNanos.Add(time.Since(t0).Nanoseconds())
}()
if c.gocached == nil {
return put(c.disk, actionID, outputID, size, r)
}
c.putHTTP.Add(1)
var diskReader, httpReader io.Reader
tee := &bestEffortTeeReader{r: r}
if size == 0 {
// Special-case the empty file so NewRequest sets "Content-Length: 0",
// as opposed to thinking we didn't set it and being unable to sniff
// the size from the reader type.
diskReader, httpReader = bytes.NewReader(nil), bytes.NewReader(nil)
} else {
pr, pw := io.Pipe()
defer pw.Close()
// The diskReader is in the driving seat. We will try to forward data
// to httpReader as well, but only best-effort.
diskReader = tee
tee.w = pw
httpReader = pr
}
httpErrCh := make(chan error)
go func() {
t0HTTP := time.Now()
defer func() {
c.putHTTPNanos.Add(time.Since(t0HTTP).Nanoseconds())
}()
httpErrCh <- c.gocached.put(ctx, actionID, outputID, size, httpReader)
}()
diskPath, err = put(c.disk, actionID, outputID, size, diskReader)
if err != nil {
return "", fmt.Errorf("error writing to disk cache: %w", errors.Join(err, tee.err))
}
select {
case err := <-httpErrCh:
if err != nil {
c.putHTTPErrors.Add(1)
} else {
c.putHTTPBytes.Add(size)
}
case <-ctx.Done():
}
return diskPath, nil
}
func (c *cigocacher) close() error {
if !c.verbose || c.gocached == nil {
return nil
}
log.Printf("cigocacher HTTP stats: %d gets (%.1fMiB, %.2fs, %d hits, %d misses, %d errors ignored); %d puts (%.1fMiB, %.2fs, %d errors ignored)",
c.getHTTP.Load(), float64(c.getHTTPBytes.Load())/float64(1<<20), float64(c.getHTTPNanos.Load())/float64(time.Second), c.getHTTPHits.Load(), c.getHTTPMisses.Load(), c.getHTTPErrors.Load(),
c.putHTTP.Load(), float64(c.putHTTPBytes.Load())/float64(1<<20), float64(c.putHTTPNanos.Load())/float64(time.Second), c.putHTTPErrors.Load())
stats, err := c.gocached.fetchStats()
if err != nil {
log.Printf("error fetching gocached stats: %v", err)
} else {
log.Printf("gocached session stats: %s", stats)
}
return nil
}
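// fetchAccessToken exchanges a GitHub Actions OIDC ID token (requested with
// audience "gocached") for a cigocached access token via the server's
// /auth/exchange-token endpoint.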
func fetchAccessToken(cl *http.Client, idTokenURL, idTokenRequestToken, gocachedURL string) (string, error) {
req, err := http.NewRequest("GET", idTokenURL+"&audience=gocached", nil)
if err != nil {
return "", err
}
req.Header.Set("Authorization", "Bearer "+idTokenRequestToken)
resp, err := cl.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
type idTokenResp struct {
Value string `json:"value"`
}
var idToken idTokenResp
if err := jsonv1.NewDecoder(resp.Body).Decode(&idToken); err != nil {
return "", err
}
req, _ = http.NewRequest("POST", gocachedURL+"/auth/exchange-token", strings.NewReader(`{"jwt":"`+idToken.Value+`"}`))
req.Header.Set("Content-Type", "application/json")
resp, err = cl.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
type accessTokenResp struct {
AccessToken string `json:"access_token"`
}
var accessToken accessTokenResp
if err := jsonv1.NewDecoder(resp.Body).Decode(&accessToken); err != nil {
return "", err
}
return accessToken.AccessToken, nil
}
type bestEffortTeeReader struct {
r io.Reader
w io.WriteCloser
err error
}
func (t *bestEffortTeeReader) Read(p []byte) (int, error) {
n, err := t.r.Read(p)
if n > 0 && t.w != nil {
if _, err := t.w.Write(p[:n]); err != nil {
t.err = errors.Join(err, t.w.Close())
t.w = nil
}
}
return n, err
}
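
The subtle part of put above is the best-effort tee: the disk write is the primary consumer that drives reads from the Go toolchain, while the HTTP upload tails along through an io.Pipe and is silently abandoned on its first error. A minimal, self-contained sketch of that pattern (the names and the strings.Reader/Builder plumbing here are illustrative, not from the file above):

// besteffort_tee_sketch.go — the pattern from cigocacher.put in isolation:
// the primary consumer drives the read; the secondary consumer is fed via an
// io.Pipe and is silently dropped after its first write error.
package main

import (
	"fmt"
	"io"
	"strings"
)

type bestEffortTee struct {
	r io.Reader
	w io.WriteCloser // set to nil after the first write error
}

func (t *bestEffortTee) Read(p []byte) (int, error) {
	n, err := t.r.Read(p)
	if n > 0 && t.w != nil {
		if _, werr := t.w.Write(p[:n]); werr != nil {
			t.w.Close() // give up on the secondary consumer
			t.w = nil
		}
	}
	return n, err
}

func main() {
	src := strings.NewReader("cache object bytes")
	pr, pw := io.Pipe()
	tee := &bestEffortTee{r: src, w: pw}

	done := make(chan error, 1)
	go func() {
		// Secondary consumer: stands in for the HTTP upload.
		_, err := io.Copy(io.Discard, pr)
		done <- err
	}()

	// Primary consumer: stands in for the disk cache write.
	var disk strings.Builder
	if _, err := io.Copy(&disk, tee); err != nil {
		panic(err)
	}
	pw.Close() // signal EOF to the secondary consumer
	if err := <-done; err != nil {
		fmt.Println("secondary consumer error (ignored):", err)
	}
	fmt.Println("disk got:", disk.String())
}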

@ -0,0 +1,88 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"encoding/json"
"errors"
"fmt"
"io"
"log"
"os"
"path/filepath"
"time"
"github.com/bradfitz/go-tool-cache/cachers"
)
// indexEntry is the metadata that DiskCache stores on disk for an ActionID.
type indexEntry struct {
Version int `json:"v"`
OutputID string `json:"o"`
Size int64 `json:"n"`
TimeNanos int64 `json:"t"`
}
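// validHex reports whether x is a plausible lowercase-hex ID: between 4 and
// 100 characters, all in [0-9a-f].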
func validHex(x string) bool {
if len(x) < 4 || len(x) > 100 {
return false
}
for _, b := range x {
if b >= '0' && b <= '9' || b >= 'a' && b <= 'f' {
continue
}
return false
}
return true
}
// put is like dc.Put but refactored to support safe concurrent writes on Windows.
// TODO(tomhjp): upstream these changes to go-tool-cache once they look stable.
func put(dc *cachers.DiskCache, actionID, outputID string, size int64, body io.Reader) (diskPath string, _ error) {
if len(actionID) < 4 || len(outputID) < 4 {
return "", fmt.Errorf("actionID and outputID must be at least 4 characters long")
}
if !validHex(actionID) {
log.Printf("diskcache: got invalid actionID %q", actionID)
return "", errors.New("actionID must be hex")
}
if !validHex(outputID) {
log.Printf("diskcache: got invalid outputID %q", outputID)
return "", errors.New("outputID must be hex")
}
actionFile := dc.ActionFilename(actionID)
outputFile := dc.OutputFilename(outputID)
actionDir := filepath.Dir(actionFile)
outputDir := filepath.Dir(outputFile)
if err := os.MkdirAll(actionDir, 0755); err != nil {
return "", fmt.Errorf("failed to create action directory: %w", err)
}
if err := os.MkdirAll(outputDir, 0755); err != nil {
return "", fmt.Errorf("failed to create output directory: %w", err)
}
wrote, err := writeOutputFile(outputFile, body, size, outputID)
if err != nil {
return "", err
}
if wrote != size {
return "", fmt.Errorf("wrote %d bytes, expected %d", wrote, size)
}
ij, err := json.Marshal(indexEntry{
Version: 1,
OutputID: outputID,
Size: size,
TimeNanos: time.Now().UnixNano(),
})
if err != nil {
return "", err
}
if err := writeActionFile(dc.ActionFilename(actionID), ij); err != nil {
return "", fmt.Errorf("atomic write failed: %w", err)
}
return outputFile, nil
}
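
For reference, the action file that put writes is just the indexEntry above serialized as compact JSON; a sketch of what lands on disk (field values illustrative):

// indexentry_sketch.go — what put stores in the action file (values illustrative).
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type indexEntry struct {
	Version   int    `json:"v"`
	OutputID  string `json:"o"`
	Size      int64  `json:"n"`
	TimeNanos int64  `json:"t"`
}

func main() {
	b, _ := json.Marshal(indexEntry{
		Version:   1,
		OutputID:  "ab12cd34", // hex sha256 of the cached object (shortened here)
		Size:      4096,
		TimeNanos: time.Now().UnixNano(),
	})
	fmt.Println(string(b)) // {"v":1,"o":"ab12cd34","n":4096,"t":...}
}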

@ -0,0 +1,44 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !windows
package main
import (
"bytes"
"io"
"os"
"path/filepath"
)
func writeActionFile(dest string, b []byte) error {
_, err := writeAtomic(dest, bytes.NewReader(b))
return err
}
func writeOutputFile(dest string, r io.Reader, _ int64, _ string) (int64, error) {
return writeAtomic(dest, r)
}
func writeAtomic(dest string, r io.Reader) (int64, error) {
tf, err := os.CreateTemp(filepath.Dir(dest), filepath.Base(dest)+".*")
if err != nil {
return 0, err
}
size, err := io.Copy(tf, r)
if err != nil {
tf.Close()
os.Remove(tf.Name())
return 0, err
}
if err := tf.Close(); err != nil {
os.Remove(tf.Name())
return 0, err
}
if err := os.Rename(tf.Name(), dest); err != nil {
os.Remove(tf.Name())
return 0, err
}
return size, nil
}

@ -0,0 +1,102 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"crypto/sha256"
"errors"
"fmt"
"io"
"os"
)
// The functions in this file are based on go's own cache in
// cmd/go/internal/cache/cache.go, particularly putIndexEntry and copyFile.
// writeActionFile writes the indexEntry metadata for an ActionID to disk. It
// may be called for the same actionID concurrently from multiple processes,
// and the outputID for a specific actionID may change from time to time due
// to non-deterministic builds. It makes a best-effort to delete the file if
// anything goes wrong.
func writeActionFile(dest string, b []byte) (retErr error) {
f, err := os.OpenFile(dest, os.O_WRONLY|os.O_CREATE, 0o666)
if err != nil {
return err
}
defer func() {
cerr := f.Close()
if retErr != nil || cerr != nil {
retErr = errors.Join(retErr, cerr, os.Remove(dest))
}
}()
_, err = f.Write(b)
if err != nil {
return err
}
// Truncate the file only *after* writing it.
// (This should be a no-op, but truncate just in case of previous corruption.)
//
// This differs from os.WriteFile, which truncates to 0 *before* writing
// via os.O_TRUNC. Truncating only after writing ensures that a second write
// of the same content to the same file is idempotent, and does not - even
// temporarily! - undo the effect of the first write.
return f.Truncate(int64(len(b)))
}
// writeOutputFile writes content to be cached to disk. The outputID is the
// sha256 hash of the content, and each file should only be written ~once,
// assuming no sha256 hash collisions. It may be written multiple times if
// concurrent processes are both populating the same output. The file is opened
// with FILE_SHARE_READ|FILE_SHARE_WRITE, which means both processes can write
// the same contents concurrently without conflict.
//
// It makes a best effort to clean up if anything goes wrong, but the file may
// be left in an inconsistent state in the event of disk-related errors such as
// another process taking file locks, or power loss etc.
func writeOutputFile(dest string, r io.Reader, size int64, outputID string) (_ int64, retErr error) {
info, err := os.Stat(dest)
if err == nil && info.Size() == size {
// Already exists, check the hash.
if f, err := os.Open(dest); err == nil {
h := sha256.New()
io.Copy(h, f)
f.Close()
if fmt.Sprintf("%x", h.Sum(nil)) == outputID {
// Still drain the reader to ensure associated resources are released.
return io.Copy(io.Discard, r)
}
}
}
// Didn't successfully find the pre-existing file, write it.
mode := os.O_WRONLY | os.O_CREATE
if err == nil && info.Size() > size {
mode |= os.O_TRUNC // Should never happen, but self-heal.
}
f, err := os.OpenFile(dest, mode, 0644)
if err != nil {
return 0, fmt.Errorf("failed to open output file %q: %w", dest, err)
}
defer func() {
cerr := f.Close()
if retErr != nil || cerr != nil {
retErr = errors.Join(retErr, cerr, os.Remove(dest))
}
}()
// Copy file to f, but also into h to double-check hash.
h := sha256.New()
w := io.MultiWriter(f, h)
n, err := io.Copy(w, r)
if err != nil {
return 0, err
}
if fmt.Sprintf("%x", h.Sum(nil)) != outputID {
return 0, errors.New("file content changed underfoot")
}
return n, nil
}
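
The write-then-truncate ordering is the load-bearing detail here: os.WriteFile truncates to zero before writing (O_TRUNC), so a concurrent reader could momentarily observe an empty action file, whereas truncating after the write makes a second write of identical content invisible to readers. A standalone sketch of the idiom (path and content illustrative):

// truncate_after_write_sketch.go — writeActionFile's idempotent-write idiom:
// write first, truncate after, so re-writing identical content never shrinks
// the file below its final size and readers never see a zero-length file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func writeIdempotent(path string, b []byte) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE, 0o666)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Write(b); err != nil {
		return err
	}
	return f.Truncate(int64(len(b))) // usually a no-op; heals prior corruption
}

func main() {
	path := filepath.Join(os.TempDir(), "action.example") // illustrative path
	content := []byte(`{"v":1,"o":"ab12cd34","n":4096,"t":0}`)
	for range 2 { // the second write is invisible to concurrent readers
		if err := writeIdempotent(path, content); err != nil {
			panic(err)
		}
	}
	b, _ := os.ReadFile(path)
	fmt.Printf("%s\n", b)
}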

@ -0,0 +1,115 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"context"
"fmt"
"io"
"log"
"net/http"
)
type gocachedClient struct {
baseURL string // base URL of the cacher server, like "http://localhost:31364".
cl *http.Client // http.Client to use.
accessToken string // Bearer token to use in the Authorization header.
verbose bool
}
// drainAndClose reads and throws away a small bounded amount of data. This is a
// best-effort attempt to allow connection reuse; Go's HTTP/1 Transport won't
// reuse a TCP connection unless you fully consume HTTP responses.
func drainAndClose(body io.ReadCloser) {
io.CopyN(io.Discard, body, 4<<10)
body.Close()
}
func tryReadErrorMessage(res *http.Response) []byte {
msg, _ := io.ReadAll(io.LimitReader(res.Body, 4<<10))
return msg
}
func (c *gocachedClient) get(ctx context.Context, actionID string) (outputID string, resp *http.Response, err error) {
// TODO(tomhjp): make sure we time out if cigocached disappears, but for some
// reason, this seemed to tank network performance.
// // Set a generous upper limit on the time we'll wait for a response. We'll
// // shorten this deadline later once we know the content length.
// ctx, cancel := context.WithTimeout(ctx, time.Minute)
// defer cancel()
req, _ := http.NewRequestWithContext(ctx, "GET", c.baseURL+"/action/"+actionID, nil)
req.Header.Set("Want-Object", "1") // opt in to single roundtrip protocol
if c.accessToken != "" {
req.Header.Set("Authorization", "Bearer "+c.accessToken)
}
res, err := c.cl.Do(req)
if err != nil {
return "", nil, err
}
defer func() {
if resp == nil {
drainAndClose(res.Body)
}
}()
if res.StatusCode == http.StatusNotFound {
return "", nil, nil
}
if res.StatusCode != http.StatusOK {
msg := tryReadErrorMessage(res)
if c.verbose {
log.Printf("error GET /action/%s: %v, %s", actionID, res.Status, msg)
}
return "", nil, fmt.Errorf("unexpected GET /action/%s status %v", actionID, res.Status)
}
outputID = res.Header.Get("Go-Output-Id")
if outputID == "" {
return "", nil, fmt.Errorf("missing Go-Output-Id header in response")
}
if res.ContentLength == -1 {
return "", nil, fmt.Errorf("no Content-Length from server")
}
return outputID, res, nil
}
func (c *gocachedClient) put(ctx context.Context, actionID, outputID string, size int64, body io.Reader) error {
req, _ := http.NewRequestWithContext(ctx, "PUT", c.baseURL+"/"+actionID+"/"+outputID, body)
req.ContentLength = size
if c.accessToken != "" {
req.Header.Set("Authorization", "Bearer "+c.accessToken)
}
res, err := c.cl.Do(req)
if err != nil {
if c.verbose {
log.Printf("error PUT /%s/%s: %v", actionID, outputID, err)
}
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusNoContent {
msg := tryReadErrorMessage(res)
if c.verbose {
log.Printf("error PUT /%s/%s: %v, %s", actionID, outputID, res.Status, msg)
}
return fmt.Errorf("unexpected PUT /%s/%s status %v", actionID, outputID, res.Status)
}
return nil
}
func (c *gocachedClient) fetchStats() (string, error) {
req, _ := http.NewRequest("GET", c.baseURL+"/session/stats", nil)
req.Header.Set("Authorization", "Bearer "+c.accessToken)
resp, err := c.cl.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
b, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
return string(b), nil
}
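
drainAndClose follows a general net/http idiom: Go's HTTP/1 transport returns a connection to its idle pool only after the response body has been fully consumed, so reading a small bounded amount before closing keeps connections reusable without risking an unbounded read. The same idiom in isolation (URL illustrative):

// drain_sketch.go — the connection-reuse idiom behind drainAndClose: consume
// a bounded amount of the body so the transport can return the TCP connection
// to its idle pool, then close.
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	res, err := http.Get("https://example.com/") // illustrative URL
	if err != nil {
		log.Fatal(err)
	}
	// Past 4 KiB, giving up the connection is cheaper than reading an
	// arbitrarily large body just to keep it alive.
	io.CopyN(io.Discard, res.Body, 4<<10)
	res.Body.Close()
}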

@@ -192,45 +192,34 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
			writef("\t\tdst.%s[k] = append([]%s{}, src.%s[k]...)", fname, n, fname)
			writef("\t}")
			writef("}")
-		} else if codegen.ContainsPointers(elem) {
+		} else if codegen.IsViewType(elem) || !codegen.ContainsPointers(elem) {
+			// If the map values are view types (which are
+			// immutable and don't need cloning) or don't
+			// themselves contain pointers, we can just
+			// clone the map itself.
+			it.Import("", "maps")
+			writef("\tdst.%s = maps.Clone(src.%s)", fname, fname)
+		} else {
+			// Otherwise we need to clone each element of
+			// the map using our recursive helper.
			writef("if dst.%s != nil {", fname)
			writef("\tdst.%s = map[%s]%s{}", fname, it.QualifiedName(ft.Key()), it.QualifiedName(elem))
			writef("\tfor k, v := range src.%s {", fname)
-			switch elem := elem.Underlying().(type) {
-			case *types.Pointer:
-				writef("\t\tif v == nil { dst.%s[k] = nil } else {", fname)
-				if base := elem.Elem().Underlying(); codegen.ContainsPointers(base) {
-					if _, isIface := base.(*types.Interface); isIface {
-						it.Import("", "tailscale.com/types/ptr")
-						writef("\t\t\tdst.%s[k] = ptr.To((*v).Clone())", fname)
-					} else {
-						writef("\t\t\tdst.%s[k] = v.Clone()", fname)
-					}
-				} else {
-					it.Import("", "tailscale.com/types/ptr")
-					writef("\t\t\tdst.%s[k] = ptr.To(*v)", fname)
-				}
-				writef("}")
-			case *types.Interface:
-				if cloneResultType := methodResultType(elem, "Clone"); cloneResultType != nil {
-					if _, isPtr := cloneResultType.(*types.Pointer); isPtr {
-						writef("\t\tdst.%s[k] = *(v.Clone())", fname)
-					} else {
-						writef("\t\tdst.%s[k] = v.Clone()", fname)
-					}
-				} else {
-					writef(`panic("%s (%v) does not have a Clone method")`, fname, elem)
-				}
-			default:
-				writef("\t\tdst.%s[k] = *(v.Clone())", fname)
-			}
+			// Use a recursive helper here; this handles
+			// arbitrarily nested maps in addition to
+			// simpler types.
+			writeMapValueClone(mapValueCloneParams{
+				Buf:        buf,
+				It:         it,
+				Elem:       elem,
+				SrcExpr:    "v",
+				DstExpr:    fmt.Sprintf("dst.%s[k]", fname),
+				BaseIndent: "\t",
+				Depth:      1,
+			})
			writef("\t}")
			writef("}")
-		} else {
-			it.Import("", "maps")
-			writef("\tdst.%s = maps.Clone(src.%s)", fname, fname)
		}
	case *types.Interface:
		// If ft is an interface with a "Clone() ft" method, it can be used to clone the field.

@@ -271,3 +260,99 @@ func methodResultType(typ types.Type, method string) types.Type {
	}
	return sig.Results().At(0).Type()
}
type mapValueCloneParams struct {
// Buf is the buffer to write generated code to
Buf *bytes.Buffer
// It is the import tracker for managing imports.
It *codegen.ImportTracker
// Elem is the type of the map value to clone
Elem types.Type
// SrcExpr is the expression for the source value (e.g., "v", "v2", "v3")
SrcExpr string
// DstExpr is the expression for the destination (e.g., "dst.Field[k]", "dst.Field[k][k2]")
DstExpr string
// BaseIndent is the "base" indentation string for the generated code
// (i.e. 1 or more tabs). Additional indentation will be added based on
// the Depth parameter.
BaseIndent string
// Depth is the current nesting depth (1 for first level, 2 for second, etc.)
Depth int
}
// writeMapValueClone generates code to clone a map value recursively.
// It handles arbitrary nesting of maps, pointers, and interfaces.
func writeMapValueClone(params mapValueCloneParams) {
indent := params.BaseIndent + strings.Repeat("\t", params.Depth)
writef := func(format string, args ...any) {
fmt.Fprintf(params.Buf, indent+format+"\n", args...)
}
switch elem := params.Elem.Underlying().(type) {
case *types.Pointer:
writef("if %s == nil { %s = nil } else {", params.SrcExpr, params.DstExpr)
if base := elem.Elem().Underlying(); codegen.ContainsPointers(base) {
if _, isIface := base.(*types.Interface); isIface {
params.It.Import("", "tailscale.com/types/ptr")
writef("\t%s = ptr.To((*%s).Clone())", params.DstExpr, params.SrcExpr)
} else {
writef("\t%s = %s.Clone()", params.DstExpr, params.SrcExpr)
}
} else {
params.It.Import("", "tailscale.com/types/ptr")
writef("\t%s = ptr.To(*%s)", params.DstExpr, params.SrcExpr)
}
writef("}")
case *types.Map:
// Recursively handle nested maps
innerElem := elem.Elem()
if codegen.IsViewType(innerElem) || !codegen.ContainsPointers(innerElem) {
// Inner map values don't need deep cloning
params.It.Import("", "maps")
writef("%s = maps.Clone(%s)", params.DstExpr, params.SrcExpr)
} else {
// Inner map values need cloning
keyType := params.It.QualifiedName(elem.Key())
valueType := params.It.QualifiedName(innerElem)
// Generate unique variable names for nested loops based on depth
keyVar := fmt.Sprintf("k%d", params.Depth+1)
valVar := fmt.Sprintf("v%d", params.Depth+1)
writef("if %s == nil {", params.SrcExpr)
writef("\t%s = nil", params.DstExpr)
writef("\tcontinue")
writef("}")
writef("%s = map[%s]%s{}", params.DstExpr, keyType, valueType)
writef("for %s, %s := range %s {", keyVar, valVar, params.SrcExpr)
// Recursively generate cloning code for the nested map value
nestedDstExpr := fmt.Sprintf("%s[%s]", params.DstExpr, keyVar)
writeMapValueClone(mapValueCloneParams{
Buf: params.Buf,
It: params.It,
Elem: innerElem,
SrcExpr: valVar,
DstExpr: nestedDstExpr,
BaseIndent: params.BaseIndent,
Depth: params.Depth + 1,
})
writef("}")
}
case *types.Interface:
if cloneResultType := methodResultType(elem, "Clone"); cloneResultType != nil {
if _, isPtr := cloneResultType.(*types.Pointer); isPtr {
writef("%s = *(%s.Clone())", params.DstExpr, params.SrcExpr)
} else {
writef("%s = %s.Clone()", params.DstExpr, params.SrcExpr)
}
} else {
writef(`panic("map value (%%v) does not have a Clone method")`, elem)
}
default:
writef("%s = *(%s.Clone())", params.DstExpr, params.SrcExpr)
}
}

@ -108,3 +108,109 @@ func TestInterfaceContainer(t *testing.T) {
}) })
} }
} }
func TestMapWithPointers(t *testing.T) {
num1, num2 := 42, 100
orig := &clonerex.MapWithPointers{
Nested: map[string]*int{
"foo": &num1,
"bar": &num2,
},
WithCloneMethod: map[string]*clonerex.SliceContainer{
"container1": {Slice: []*int{&num1, &num2}},
"container2": {Slice: []*int{&num1}},
},
CloneInterface: map[string]clonerex.Cloneable{
"impl1": &clonerex.CloneableImpl{Value: 123},
"impl2": &clonerex.CloneableImpl{Value: 456},
},
}
cloned := orig.Clone()
if !reflect.DeepEqual(orig, cloned) {
t.Errorf("Clone() = %v, want %v", cloned, orig)
}
// Mutate cloned.Nested pointer values
*cloned.Nested["foo"] = 999
if *orig.Nested["foo"] == 999 {
t.Errorf("Clone() aliased memory in Nested: original was modified")
}
// Mutate cloned.WithCloneMethod slice values
*cloned.WithCloneMethod["container1"].Slice[0] = 888
if *orig.WithCloneMethod["container1"].Slice[0] == 888 {
t.Errorf("Clone() aliased memory in WithCloneMethod: original was modified")
}
// Mutate cloned.CloneInterface values
if impl, ok := cloned.CloneInterface["impl1"].(*clonerex.CloneableImpl); ok {
impl.Value = 777
if origImpl, ok := orig.CloneInterface["impl1"].(*clonerex.CloneableImpl); ok {
if origImpl.Value == 777 {
t.Errorf("Clone() aliased memory in CloneInterface: original was modified")
}
}
}
}
func TestDeeplyNestedMap(t *testing.T) {
num := 123
orig := &clonerex.DeeplyNestedMap{
ThreeLevels: map[string]map[string]map[string]int{
"a": {
"b": {"c": 1, "d": 2},
"e": {"f": 3},
},
"g": {
"h": {"i": 4},
},
},
FourLevels: map[string]map[string]map[string]map[string]*clonerex.SliceContainer{
"l1a": {
"l2a": {
"l3a": {
"l4a": {Slice: []*int{&num}},
"l4b": {Slice: []*int{&num, &num}},
},
},
},
},
}
cloned := orig.Clone()
if !reflect.DeepEqual(orig, cloned) {
t.Errorf("Clone() = %v, want %v", cloned, orig)
}
// Mutate the clone's ThreeLevels map
cloned.ThreeLevels["a"]["b"]["c"] = 777
if orig.ThreeLevels["a"]["b"]["c"] == 777 {
t.Errorf("Clone() aliased memory in ThreeLevels: original was modified")
}
// Mutate the clone's FourLevels map at the deepest pointer level
*cloned.FourLevels["l1a"]["l2a"]["l3a"]["l4a"].Slice[0] = 666
if *orig.FourLevels["l1a"]["l2a"]["l3a"]["l4a"].Slice[0] == 666 {
t.Errorf("Clone() aliased memory in FourLevels: original was modified")
}
// Add a new top-level key to the clone's FourLevels map
newNum := 999
cloned.FourLevels["l1b"] = map[string]map[string]map[string]*clonerex.SliceContainer{
"l2b": {
"l3b": {
"l4c": {Slice: []*int{&newNum}},
},
},
}
if _, exists := orig.FourLevels["l1b"]; exists {
t.Errorf("Clone() aliased FourLevels map: new top-level key appeared in original")
}
// Add a new nested key to the clone's FourLevels map
cloned.FourLevels["l1a"]["l2a"]["l3a"]["l4c"] = &clonerex.SliceContainer{Slice: []*int{&newNum}}
if _, exists := orig.FourLevels["l1a"]["l2a"]["l3a"]["l4c"]; exists {
t.Errorf("Clone() aliased FourLevels map: new nested key appeared in original")
}
}

@@ -1,7 +1,7 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause

-//go:generate go run tailscale.com/cmd/cloner -clonefunc=true -type SliceContainer,InterfaceContainer
+//go:generate go run tailscale.com/cmd/cloner -clonefunc=true -type SliceContainer,InterfaceContainer,MapWithPointers,DeeplyNestedMap

// Package clonerex is an example package for the cloner tool.
package clonerex

@@ -32,3 +32,15 @@ func (c *CloneableImpl) Clone() Cloneable {
type InterfaceContainer struct {
	Interface Cloneable
}

+type MapWithPointers struct {
+	Nested          map[string]*int
+	WithCloneMethod map[string]*SliceContainer
+	CloneInterface  map[string]Cloneable
+}
+
+// DeeplyNestedMap tests arbitrary depth of map nesting (3+ levels)
+type DeeplyNestedMap struct {
+	ThreeLevels map[string]map[string]map[string]int
+	FourLevels  map[string]map[string]map[string]map[string]*SliceContainer
+}

@@ -6,6 +6,8 @@
package clonerex

import (
+	"maps"
+
	"tailscale.com/types/ptr"
)

@@ -54,9 +56,114 @@ var _InterfaceContainerCloneNeedsRegeneration = InterfaceContainer(struct {
	Interface Cloneable
}{})
// Clone makes a deep copy of MapWithPointers.
// The result aliases no memory with the original.
func (src *MapWithPointers) Clone() *MapWithPointers {
if src == nil {
return nil
}
dst := new(MapWithPointers)
*dst = *src
if dst.Nested != nil {
dst.Nested = map[string]*int{}
for k, v := range src.Nested {
if v == nil {
dst.Nested[k] = nil
} else {
dst.Nested[k] = ptr.To(*v)
}
}
}
if dst.WithCloneMethod != nil {
dst.WithCloneMethod = map[string]*SliceContainer{}
for k, v := range src.WithCloneMethod {
if v == nil {
dst.WithCloneMethod[k] = nil
} else {
dst.WithCloneMethod[k] = v.Clone()
}
}
}
if dst.CloneInterface != nil {
dst.CloneInterface = map[string]Cloneable{}
for k, v := range src.CloneInterface {
dst.CloneInterface[k] = v.Clone()
}
}
return dst
}
// A compilation failure here means this code must be regenerated, with the command at the top of this file.
var _MapWithPointersCloneNeedsRegeneration = MapWithPointers(struct {
Nested map[string]*int
WithCloneMethod map[string]*SliceContainer
CloneInterface map[string]Cloneable
}{})
// Clone makes a deep copy of DeeplyNestedMap.
// The result aliases no memory with the original.
func (src *DeeplyNestedMap) Clone() *DeeplyNestedMap {
if src == nil {
return nil
}
dst := new(DeeplyNestedMap)
*dst = *src
if dst.ThreeLevels != nil {
dst.ThreeLevels = map[string]map[string]map[string]int{}
for k, v := range src.ThreeLevels {
if v == nil {
dst.ThreeLevels[k] = nil
continue
}
dst.ThreeLevels[k] = map[string]map[string]int{}
for k2, v2 := range v {
dst.ThreeLevels[k][k2] = maps.Clone(v2)
}
}
}
if dst.FourLevels != nil {
dst.FourLevels = map[string]map[string]map[string]map[string]*SliceContainer{}
for k, v := range src.FourLevels {
if v == nil {
dst.FourLevels[k] = nil
continue
}
dst.FourLevels[k] = map[string]map[string]map[string]*SliceContainer{}
for k2, v2 := range v {
if v2 == nil {
dst.FourLevels[k][k2] = nil
continue
}
dst.FourLevels[k][k2] = map[string]map[string]*SliceContainer{}
for k3, v3 := range v2 {
if v3 == nil {
dst.FourLevels[k][k2][k3] = nil
continue
}
dst.FourLevels[k][k2][k3] = map[string]*SliceContainer{}
for k4, v4 := range v3 {
if v4 == nil {
dst.FourLevels[k][k2][k3][k4] = nil
} else {
dst.FourLevels[k][k2][k3][k4] = v4.Clone()
}
}
}
}
}
}
return dst
}
// A compilation failure here means this code must be regenerated, with the command at the top of this file.
var _DeeplyNestedMapCloneNeedsRegeneration = DeeplyNestedMap(struct {
ThreeLevels map[string]map[string]map[string]int
FourLevels map[string]map[string]map[string]map[string]*SliceContainer
}{})
// Clone duplicates src into dst and reports whether it succeeded.
// To succeed, <src, dst> must be of types <*T, *T> or <*T, **T>,
-// where T is one of SliceContainer,InterfaceContainer.
+// where T is one of SliceContainer,InterfaceContainer,MapWithPointers,DeeplyNestedMap.
func Clone(dst, src any) bool {
	switch src := src.(type) {
	case *SliceContainer:

@@ -77,6 +184,24 @@ func Clone(dst, src any) bool {
		*dst = src.Clone()
		return true
	}
+	case *MapWithPointers:
+		switch dst := dst.(type) {
+		case *MapWithPointers:
+			*dst = *src.Clone()
+			return true
+		case **MapWithPointers:
+			*dst = src.Clone()
+			return true
+		}
+	case *DeeplyNestedMap:
+		switch dst := dst.(type) {
+		case *DeeplyNestedMap:
+			*dst = *src.Clone()
+			return true
+		case **DeeplyNestedMap:
+			*dst = src.Clone()
+			return true
+		}
	}
	return false
}

@@ -1287,8 +1287,8 @@ type localAPI struct {
	notify *ipn.Notify
}

-func (l *localAPI) Start() error {
-	path := filepath.Join(l.FSRoot, "tmp/tailscaled.sock.fake")
+func (lc *localAPI) Start() error {
+	path := filepath.Join(lc.FSRoot, "tmp/tailscaled.sock.fake")
	if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil {
		return err
	}

@@ -1298,30 +1298,30 @@ func (l *localAPI) Start() error {
		return err
	}
-	l.srv = &http.Server{
-		Handler: l,
+	lc.srv = &http.Server{
+		Handler: lc,
	}
-	l.Path = path
-	l.cond = sync.NewCond(&l.Mutex)
-	go l.srv.Serve(ln)
+	lc.Path = path
+	lc.cond = sync.NewCond(&lc.Mutex)
+	go lc.srv.Serve(ln)
	return nil
}

-func (l *localAPI) Close() {
-	l.srv.Close()
+func (lc *localAPI) Close() {
+	lc.srv.Close()
}

-func (l *localAPI) Notify(n *ipn.Notify) {
+func (lc *localAPI) Notify(n *ipn.Notify) {
	if n == nil {
		return
	}
-	l.Lock()
-	defer l.Unlock()
-	l.notify = n
-	l.cond.Broadcast()
+	lc.Lock()
+	defer lc.Unlock()
+	lc.notify = n
+	lc.cond.Broadcast()
}

-func (l *localAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+func (lc *localAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	switch r.URL.Path {
	case "/localapi/v0/serve-config":
		if r.Method != "POST" {

@@ -1348,11 +1348,11 @@ func (l *localAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
			f.Flush()
		}
		enc := json.NewEncoder(w)
-		l.Lock()
-		defer l.Unlock()
+		lc.Lock()
+		defer lc.Unlock()
		for {
-			if l.notify != nil {
-				if err := enc.Encode(l.notify); err != nil {
+			if lc.notify != nil {
+				if err := enc.Encode(lc.notify); err != nil {
					// Usually broken pipe as the test client disconnects.
					return
				}

@@ -1360,7 +1360,7 @@ func (l *localAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
				f.Flush()
			}
		}
-		l.cond.Wait()
+		lc.cond.Wait()
	}
}

@@ -2,6 +2,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depaware)
filippo.io/edwards25519 from github.com/hdevalence/ed25519consensus
filippo.io/edwards25519/field from filippo.io/edwards25519
+github.com/axiomhq/hyperloglog from tailscale.com/derp/derpserver
github.com/beorn7/perks/quantile from github.com/prometheus/client_golang/prometheus
💣 github.com/cespare/xxhash/v2 from github.com/prometheus/client_golang/prometheus
github.com/coder/websocket from tailscale.com/cmd/derper+

@@ -9,6 +10,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depaware)
github.com/coder/websocket/internal/util from github.com/coder/websocket
github.com/coder/websocket/internal/xsync from github.com/coder/websocket
W 💣 github.com/dblohm7/wingoes from tailscale.com/util/winutil
+github.com/dgryski/go-metro from github.com/axiomhq/hyperloglog
github.com/fxamacker/cbor/v2 from tailscale.com/tka
github.com/go-json-experiment/json from tailscale.com/types/opt+
github.com/go-json-experiment/json/internal from github.com/go-json-experiment/json+

@@ -30,9 +32,9 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depaware)
github.com/prometheus/client_model/go from github.com/prometheus/client_golang/prometheus+
github.com/prometheus/common/expfmt from github.com/prometheus/client_golang/prometheus+
github.com/prometheus/common/model from github.com/prometheus/client_golang/prometheus+
-LD github.com/prometheus/procfs from github.com/prometheus/client_golang/prometheus
-LD github.com/prometheus/procfs/internal/fs from github.com/prometheus/procfs
-LD github.com/prometheus/procfs/internal/util from github.com/prometheus/procfs
+L github.com/prometheus/procfs from github.com/prometheus/client_golang/prometheus
+L github.com/prometheus/procfs/internal/fs from github.com/prometheus/procfs
+L github.com/prometheus/procfs/internal/util from github.com/prometheus/procfs
W 💣 github.com/tailscale/go-winio from tailscale.com/safesocket
W 💣 github.com/tailscale/go-winio/internal/fs from github.com/tailscale/go-winio
W 💣 github.com/tailscale/go-winio/internal/socket from github.com/tailscale/go-winio

@@ -72,7 +74,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depaware)
google.golang.org/protobuf/reflect/protoregistry from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/runtime/protoiface from google.golang.org/protobuf/internal/impl+
google.golang.org/protobuf/runtime/protoimpl from github.com/prometheus/client_model/go+
-google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+
+💣 google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+
tailscale.com from tailscale.com/version
💣 tailscale.com/atomicfile from tailscale.com/cmd/derper+
tailscale.com/client/local from tailscale.com/derp/derpserver

@@ -116,7 +118,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depaware)
tailscale.com/syncs from tailscale.com/cmd/derper+
tailscale.com/tailcfg from tailscale.com/client/local+
tailscale.com/tka from tailscale.com/client/local+
-LW tailscale.com/tsconst from tailscale.com/net/netmon+
+tailscale.com/tsconst from tailscale.com/net/netmon+
tailscale.com/tstime from tailscale.com/derp+
tailscale.com/tstime/mono from tailscale.com/tstime/rate
tailscale.com/tstime/rate from tailscale.com/derp/derpserver

@@ -139,7 +141,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depaware)
tailscale.com/types/structs from tailscale.com/ipn+
tailscale.com/types/tkatype from tailscale.com/client/local+
tailscale.com/types/views from tailscale.com/ipn+
-tailscale.com/util/cibuild from tailscale.com/health
+tailscale.com/util/cibuild from tailscale.com/health+
tailscale.com/util/clientmetric from tailscale.com/net/netmon
tailscale.com/util/cloudenv from tailscale.com/hostinfo+
tailscale.com/util/ctxkey from tailscale.com/tsweb+

@@ -481,32 +481,32 @@ func newRateLimitedListener(ln net.Listener, limit rate.Limit, burst int) *rateLimitedListener {
	return &rateLimitedListener{Listener: ln, lim: rate.NewLimiter(limit, burst)}
}

-func (l *rateLimitedListener) ExpVar() expvar.Var {
+func (ln *rateLimitedListener) ExpVar() expvar.Var {
	m := new(metrics.Set)
-	m.Set("counter_accepted_connections", &l.numAccepts)
-	m.Set("counter_rejected_connections", &l.numRejects)
+	m.Set("counter_accepted_connections", &ln.numAccepts)
+	m.Set("counter_rejected_connections", &ln.numRejects)
	return m
}

var errLimitedConn = errors.New("cannot accept connection; rate limited")

-func (l *rateLimitedListener) Accept() (net.Conn, error) {
+func (ln *rateLimitedListener) Accept() (net.Conn, error) {
	// Even under a rate limited situation, we accept the connection immediately
	// and close it, rather than being slow at accepting new connections.
	// This provides two benefits: 1) it signals to the client that something
	// is going on on the server, and 2) it prevents new connections from
	// piling up and occupying resources in the OS kernel.
	// The client will retry as needed (with backoffs in place).
-	cn, err := l.Listener.Accept()
+	cn, err := ln.Listener.Accept()
	if err != nil {
		return nil, err
	}
-	if !l.lim.Allow() {
-		l.numRejects.Add(1)
+	if !ln.lim.Allow() {
+		ln.numRejects.Add(1)
		cn.Close()
		return nil, errLimitedConn
	}
-	l.numAccepts.Add(1)
+	ln.numAccepts.Add(1)
	return cn, nil
}

@ -0,0 +1,175 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"bytes"
"go/ast"
"go/format"
"go/parser"
"go/token"
"go/types"
"path"
"slices"
"strconv"
"strings"
"tailscale.com/util/must"
)
// mustFormatFile formats a Go source file and adjusts "json" imports.
// It panics if there are any parsing errors.
//
// - "encoding/json" is imported under the name "jsonv1" or "jsonv1std"
// - "encoding/json/v2" is rewritten to import "github.com/go-json-experiment/json" instead
// - "encoding/json/jsontext" is rewritten to import "github.com/go-json-experiment/json/jsontext" instead
// - "github.com/go-json-experiment/json" is imported under the name "jsonv2"
// - "github.com/go-json-experiment/json/v1" is imported under the name "jsonv1"
//
// If no changes to the file are made, it returns the input.
func mustFormatFile(in []byte) (out []byte) {
fset := token.NewFileSet()
f := must.Get(parser.ParseFile(fset, "", in, parser.ParseComments))
// Check for the existence of "json" imports.
jsonImports := make(map[string][]*ast.ImportSpec)
for _, imp := range f.Imports {
switch pkgPath := must.Get(strconv.Unquote(imp.Path.Value)); pkgPath {
case
"encoding/json",
"encoding/json/v2",
"encoding/json/jsontext",
"github.com/go-json-experiment/json",
"github.com/go-json-experiment/json/v1",
"github.com/go-json-experiment/json/jsontext":
jsonImports[pkgPath] = append(jsonImports[pkgPath], imp)
}
}
if len(jsonImports) == 0 {
return in
}
// Best-effort local type-check of the file to resolve local
// declarations so that shadowed variables can be detected.
typeInfo := &types.Info{Uses: make(map[*ast.Ident]types.Object)}
(&types.Config{
Error: func(err error) {},
}).Check("", fset, []*ast.File{f}, typeInfo)
// Rewrite imports to instead use "github.com/go-json-experiment/json".
// This ensures that code continues to build even if
// goexperiment.jsonv2 is *not* specified.
// As of https://github.com/go-json-experiment/json/pull/186,
// imports to "github.com/go-json-experiment/json" are identical
// to the standard library if built with goexperiment.jsonv2.
for fromPath, toPath := range map[string]string{
"encoding/json/v2": "github.com/go-json-experiment/json",
"encoding/json/jsontext": "github.com/go-json-experiment/json/jsontext",
} {
for _, imp := range jsonImports[fromPath] {
imp.Path.Value = strconv.Quote(toPath)
jsonImports[toPath] = append(jsonImports[toPath], imp)
}
delete(jsonImports, fromPath)
}
// While in a transitory state, where both v1 and v2 json imports
// may exist in our codebase, always explicitly import with
// either jsonv1 or jsonv2 in the package name to avoid ambiguities
// when looking at a particular Marshal or Unmarshal call site.
renames := make(map[string]string) // mapping of old names to new names
deletes := make(map[*ast.ImportSpec]bool) // set of imports to delete
for pkgPath, imps := range jsonImports {
var newName string
switch pkgPath {
case "encoding/json":
newName = "jsonv1"
// If "github.com/go-json-experiment/json/v1" is also imported,
// then use jsonv1std for "encoding/json" to avoid a conflict.
if len(jsonImports["github.com/go-json-experiment/json/v1"]) > 0 {
newName += "std"
}
case "github.com/go-json-experiment/json":
newName = "jsonv2"
case "github.com/go-json-experiment/json/v1":
newName = "jsonv1"
}
// Rename the import if different than expected.
if oldName := importName(imps[0]); oldName != newName && newName != "" {
renames[oldName] = newName
pos := imps[0].Pos() // preserve original positioning
imps[0].Name = ast.NewIdent(newName)
imps[0].Name.NamePos = pos
}
// For all redundant imports, use the first imported name.
for _, imp := range imps[1:] {
renames[importName(imp)] = importName(imps[0])
deletes[imp] = true
}
}
if len(deletes) > 0 {
f.Imports = slices.DeleteFunc(f.Imports, func(imp *ast.ImportSpec) bool {
return deletes[imp]
})
for _, decl := range f.Decls {
if genDecl, ok := decl.(*ast.GenDecl); ok && genDecl.Tok == token.IMPORT {
genDecl.Specs = slices.DeleteFunc(genDecl.Specs, func(spec ast.Spec) bool {
return deletes[spec.(*ast.ImportSpec)]
})
}
}
}
if len(renames) > 0 {
ast.Walk(astVisitor(func(n ast.Node) bool {
if sel, ok := n.(*ast.SelectorExpr); ok {
if id, ok := sel.X.(*ast.Ident); ok {
// Just because the selector looks like "json.Marshal"
// does not mean that it is referencing the "json" package.
// There could be a local "json" declaration that shadows
// the package import. Check partial type information
// to see if there was a local declaration.
if obj, ok := typeInfo.Uses[id]; ok {
if _, ok := obj.(*types.PkgName); !ok {
return true
}
}
if newName, ok := renames[id.String()]; ok {
id.Name = newName
}
}
}
return true
}), f)
}
bb := new(bytes.Buffer)
must.Do(format.Node(bb, fset, f))
return must.Get(format.Source(bb.Bytes()))
}
// importName is the local package name used for an import.
// If no explicit local name is used, then it uses string parsing
// to derive the package name from the path, relying on the convention
// that the package name is the base name of the package path.
func importName(imp *ast.ImportSpec) string {
if imp.Name != nil {
return imp.Name.String()
}
pkgPath, _ := strconv.Unquote(imp.Path.Value)
pkgPath = strings.TrimRight(pkgPath, "/v0123456789") // exclude version directories
return path.Base(pkgPath)
}
// astVisitor is a function that implements [ast.Visitor].
type astVisitor func(ast.Node) bool
func (f astVisitor) Visit(node ast.Node) ast.Visitor {
if !f(node) {
return nil
}
return f
}
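
As a quick illustration of the string-parsing fallback in importName, here is a sketch of the same logic run against a couple of inputs (the inputs are illustrative):

// importname_sketch.go — the string-parsing fallback from importName above.
package main

import (
	"fmt"
	"path"
	"strings"
)

func importNameFromPath(pkgPath string) string {
	pkgPath = strings.TrimRight(pkgPath, "/v0123456789") // drop /v1, /v2, ... suffixes
	return path.Base(pkgPath)
}

func main() {
	fmt.Println(importNameFromPath("encoding/json"))                         // json
	fmt.Println(importNameFromPath("github.com/go-json-experiment/json/v1")) // json
}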

@ -0,0 +1,162 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"go/format"
"testing"
"tailscale.com/util/must"
"tailscale.com/util/safediff"
)
func TestFormatFile(t *testing.T) {
tests := []struct{ in, want string }{{
in: `package foobar
import (
"encoding/json"
jsonv2exp "github.com/go-json-experiment/json"
)
func main() {
json.Marshal()
jsonv2exp.Marshal()
{
var json T // deliberately shadow "json" package name
json.Marshal() // should not be re-written
}
}
`,
want: `package foobar
import (
jsonv1 "encoding/json"
jsonv2 "github.com/go-json-experiment/json"
)
func main() {
jsonv1.Marshal()
jsonv2.Marshal()
{
var json T // deliberately shadow "json" package name
json.Marshal() // should not be re-written
}
}
`,
}, {
in: `package foobar
import (
"github.com/go-json-experiment/json"
jsonv2exp "github.com/go-json-experiment/json"
)
func main() {
json.Marshal()
jsonv2exp.Marshal()
}
`,
want: `package foobar
import (
jsonv2 "github.com/go-json-experiment/json"
)
func main() {
jsonv2.Marshal()
jsonv2.Marshal()
}
`,
}, {
in: `package foobar
import "github.com/go-json-experiment/json/v1"
func main() {
json.Marshal()
}
`,
want: `package foobar
import jsonv1 "github.com/go-json-experiment/json/v1"
func main() {
jsonv1.Marshal()
}
`,
}, {
in: `package foobar
import (
"encoding/json"
jsonv1in2 "github.com/go-json-experiment/json/v1"
)
func main() {
json.Marshal()
jsonv1in2.Marshal()
}
`,
want: `package foobar
import (
jsonv1std "encoding/json"
jsonv1 "github.com/go-json-experiment/json/v1"
)
func main() {
jsonv1std.Marshal()
jsonv1.Marshal()
}
`,
}, {
in: `package foobar
import (
"encoding/json"
j2 "encoding/json/v2"
"encoding/json/jsontext"
)
func main() {
json.Marshal()
j2.Marshal()
jsontext.NewEncoder
}
`,
want: `package foobar
import (
jsonv1 "encoding/json"
jsonv2 "github.com/go-json-experiment/json"
"github.com/go-json-experiment/json/jsontext"
)
func main() {
jsonv1.Marshal()
jsonv2.Marshal()
jsontext.NewEncoder
}
`,
}}
for _, tt := range tests {
got := string(must.Get(format.Source([]byte(tt.in))))
got = string(mustFormatFile([]byte(got)))
want := string(must.Get(format.Source([]byte(tt.want))))
if got != want {
diff, _ := safediff.Lines(got, want, -1)
t.Errorf("mismatch (-got +want)\n%s", diff)
t.Error(got)
t.Error(want)
}
}
}

@ -0,0 +1,124 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause

// The jsonimports tool formats all Go source files in the repository
// to enforce that "json" imports are consistent.
//
// With Go 1.25, the "encoding/json/v2" and "encoding/json/jsontext"
// packages are now available under goexperiment.jsonv2.
// This leads to possible confusion over the following:
//
// - "encoding/json"
// - "encoding/json/v2"
// - "encoding/json/jsontext"
// - "github.com/go-json-experiment/json/v1"
// - "github.com/go-json-experiment/json"
// - "github.com/go-json-experiment/json/jsontext"
//
// In order to enforce consistent usage, we apply the following rules:
//
// - Until the Go standard library formally accepts "encoding/json/v2"
// and "encoding/json/jsontext" into the standard library
// (i.e., they are no longer considered experimental),
// we forbid any code from directly importing those packages.
// Go code should instead import "github.com/go-json-experiment/json"
// and "github.com/go-json-experiment/json/jsontext".
// The latter packages contain aliases to the standard library
// if built on Go 1.25 with the goexperiment.jsonv2 tag specified.
//
// - Imports of "encoding/json" or "github.com/go-json-experiment/json/v1"
// must be explicitly imported under the package name "jsonv1".
// If both packages need to be imported, then the former should
// be imported under the package name "jsonv1std".
//
// - Imports of "github.com/go-json-experiment/json"
// must be explicitly imported under the package name "jsonv2".
//
// The latter two rules exist to provide clarity when reading code.
// Without them, it is unclear whether "json.Marshal" refers to v1 or v2.
// With them, however, it is clear that "jsonv1.Marshal" is calling v1 and
// that "jsonv2.Marshal" is calling v2.
//
// TODO(@joetsai): At this present moment, there is no guidance given on
// whether to use v1 or v2 for newly written Go source code.
// I will write a document in the near future providing more guidance.
// Feel free to continue using v1 "encoding/json" as you are accustomed to.
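//
// To check the repository, run the tool from within the repo; with -update
// it rewrites non-conforming files in place:
//
//	./tool/go run tailscale.com/cmd/jsonimports
//	./tool/go run tailscale.com/cmd/jsonimports -update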
package main

import (
	"bytes"
	"flag"
	"fmt"
	"os"
	"os/exec"
	"runtime"
	"strings"
	"sync"

	"tailscale.com/syncs"
	"tailscale.com/util/must"
	"tailscale.com/util/safediff"
)
func main() {
	update := flag.Bool("update", false, "update all Go source files")
	flag.Parse()

	// Change working directory to Git repository root.
	repoRoot := strings.TrimSuffix(string(must.Get(exec.Command(
		"git", "rev-parse", "--show-toplevel",
	).Output())), "\n")
	must.Do(os.Chdir(repoRoot))

	// Iterate over all indexed files in the Git repository.
	var printMu sync.Mutex
	var group sync.WaitGroup
	sema := syncs.NewSemaphore(runtime.NumCPU())
	var numDiffs int
	files := string(must.Get(exec.Command("git", "ls-files").Output()))
	for file := range strings.Lines(files) {
		sema.Acquire()
		group.Go(func() {
			defer sema.Release()

			// Ignore non-Go source files.
			file = strings.TrimSuffix(file, "\n")
			if !strings.HasSuffix(file, ".go") {
				return
			}

			// Format all "json" imports in the Go source file.
			srcIn := must.Get(os.ReadFile(file))
			srcOut := mustFormatFile(srcIn)

			// Print differences with each formatted file.
			if !bytes.Equal(srcIn, srcOut) {
				printMu.Lock()
				numDiffs++ // guarded by printMu; incrementing it outside the lock would race across goroutines
				fmt.Println(file)
				lines, _ := safediff.Lines(string(srcIn), string(srcOut), -1)
				for line := range strings.Lines(lines) {
					fmt.Print("\t", line)
				}
				fmt.Println()
				printMu.Unlock()

				// If -update is specified, write out the changes.
				if *update {
					mode := must.Get(os.Stat(file)).Mode()
					must.Do(os.WriteFile(file, srcOut, mode))
				}
			}
		})
	}
	group.Wait()

	// Report whether any differences were detected.
	if numDiffs > 0 && !*update {
		fmt.Printf(`%d files with "json" imports that need formatting`+"\n", numDiffs)
		fmt.Println("Please run:")
		fmt.Println("\t./tool/go run tailscale.com/cmd/jsonimports -update")
		os.Exit(1)
	}
}

@ -157,12 +157,6 @@ func (r *KubeAPIServerTSServiceReconciler) maybeProvision(ctx context.Context, s
 	// 1. Check there isn't a Tailscale Service with the same hostname
 	// already created and not owned by this ProxyGroup.
 	existingTSSvc, err := r.tsClient.GetVIPService(ctx, serviceName)
-	if isErrorFeatureFlagNotEnabled(err) {
-		logger.Warn(msgFeatureFlagNotEnabled)
-		r.recorder.Event(pg, corev1.EventTypeWarning, warningTailscaleServiceFeatureFlagNotEnabled, msgFeatureFlagNotEnabled)
-		tsoperator.SetProxyGroupCondition(pg, tsapi.KubeAPIServerProxyValid, metav1.ConditionFalse, reasonKubeAPIServerProxyInvalid, msgFeatureFlagNotEnabled, pg.Generation, r.clock, logger)
-		return nil
-	}
 	if err != nil && !isErrorTailscaleServiceNotFound(err) {
 		return fmt.Errorf("error getting Tailscale Service %q: %w", serviceName, err)
 	}

@ -182,9 +182,7 @@ func TestAPIServerProxyReconciler(t *testing.T) {
 	expectEqual(t, fc, certSecretRoleBinding(pg, ns, defaultDomain))
 	// Simulate certs being issued; should observe AdvertiseServices config change.
-	if err := populateTLSSecret(t.Context(), fc, pgName, defaultDomain); err != nil {
-		t.Fatalf("populating TLS Secret: %v", err)
-	}
+	populateTLSSecret(t, fc, pgName, defaultDomain)
 	expectReconciled(t, r, "", pgName)
 	expectedCfg.AdvertiseServices = []string{"svc:" + pgName}
@ -247,9 +245,7 @@ func TestAPIServerProxyReconciler(t *testing.T) {
 	expectMissing[rbacv1.RoleBinding](t, fc, ns, defaultDomain)
 	// Check we get the new hostname in the status once ready.
-	if err := populateTLSSecret(t.Context(), fc, pgName, updatedDomain); err != nil {
-		t.Fatalf("populating TLS Secret: %v", err)
-	}
+	populateTLSSecret(t, fc, pgName, updatedDomain)
 	mustUpdate(t, fc, "operator-ns", "test-pg-0", func(s *corev1.Secret) {
 		s.Data["profile-foo"] = []byte(`{"AdvertiseServices":["svc:test-pg"],"Config":{"NodeID":"node-foo"}}`)
 	})

@ -12,6 +12,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 github.com/coder/websocket/internal/errd from github.com/coder/websocket
 github.com/coder/websocket/internal/util from github.com/coder/websocket
 github.com/coder/websocket/internal/xsync from github.com/coder/websocket
+github.com/creachadair/msync/trigger from tailscale.com/logtail
 💣 github.com/davecgh/go-spew/spew from k8s.io/apimachinery/pkg/util/dump
 W 💣 github.com/dblohm7/wingoes from tailscale.com/net/tshttpproxy+
 W 💣 github.com/dblohm7/wingoes/com from tailscale.com/util/osdiag+
@ -70,8 +71,9 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 github.com/klauspost/compress/fse from github.com/klauspost/compress/huff0
 github.com/klauspost/compress/huff0 from github.com/klauspost/compress/zstd
 github.com/klauspost/compress/internal/cpuinfo from github.com/klauspost/compress/huff0+
+💣 github.com/klauspost/compress/internal/le from github.com/klauspost/compress/huff0+
 github.com/klauspost/compress/internal/snapref from github.com/klauspost/compress/zstd
-github.com/klauspost/compress/zstd from tailscale.com/util/zstdframe+
+github.com/klauspost/compress/zstd from tailscale.com/util/zstdframe
 github.com/klauspost/compress/zstd/internal/xxhash from github.com/klauspost/compress/zstd
 github.com/mailru/easyjson/buffer from github.com/mailru/easyjson/jwriter
 💣 github.com/mailru/easyjson/jlexer from github.com/go-openapi/swag
@ -84,6 +86,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 💣 github.com/modern-go/reflect2 from github.com/json-iterator/go
 github.com/munnerz/goautoneg from k8s.io/kube-openapi/pkg/handler3+
 github.com/opencontainers/go-digest from github.com/distribution/reference
+github.com/pires/go-proxyproto from tailscale.com/ipn/ipnlocal
 github.com/pkg/errors from github.com/evanphx/json-patch/v5+
 D github.com/prometheus-community/pro-bing from tailscale.com/wgengine/netstack
 github.com/prometheus/client_golang/internal/github.com/golang/gddo/httputil from github.com/prometheus/client_golang/prometheus/promhttp
@ -92,6 +95,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 github.com/prometheus/client_golang/prometheus/collectors from sigs.k8s.io/controller-runtime/pkg/internal/controller/metrics+
 github.com/prometheus/client_golang/prometheus/internal from github.com/prometheus/client_golang/prometheus+
 github.com/prometheus/client_golang/prometheus/promhttp from sigs.k8s.io/controller-runtime/pkg/metrics/server+
+github.com/prometheus/client_golang/prometheus/promhttp/internal from github.com/prometheus/client_golang/prometheus/promhttp
 github.com/prometheus/client_model/go from github.com/prometheus/client_golang/prometheus+
 github.com/prometheus/common/expfmt from github.com/prometheus/client_golang/prometheus+
 github.com/prometheus/common/model from github.com/prometheus/client_golang/prometheus+
@ -178,10 +182,10 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 google.golang.org/protobuf/reflect/protoregistry from github.com/golang/protobuf/proto+
 google.golang.org/protobuf/runtime/protoiface from github.com/golang/protobuf/proto+
 google.golang.org/protobuf/runtime/protoimpl from github.com/golang/protobuf/proto+
-google.golang.org/protobuf/types/descriptorpb from github.com/google/gnostic-models/openapiv3+
-google.golang.org/protobuf/types/gofeaturespb from google.golang.org/protobuf/reflect/protodesc
-google.golang.org/protobuf/types/known/anypb from github.com/google/gnostic-models/compiler+
-google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+
+💣 google.golang.org/protobuf/types/descriptorpb from github.com/google/gnostic-models/openapiv3+
+💣 google.golang.org/protobuf/types/gofeaturespb from google.golang.org/protobuf/reflect/protodesc
+💣 google.golang.org/protobuf/types/known/anypb from github.com/google/gnostic-models/compiler+
+💣 google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+
 gopkg.in/evanphx/json-patch.v4 from k8s.io/client-go/testing
 gopkg.in/inf.v0 from k8s.io/apimachinery/pkg/api/resource
 gopkg.in/yaml.v3 from github.com/go-openapi/swag+
@ -723,9 +727,11 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 tailscale.com/feature/buildfeatures from tailscale.com/wgengine/magicsock+
 tailscale.com/feature/c2n from tailscale.com/tsnet
 tailscale.com/feature/condlite/expvar from tailscale.com/wgengine/magicsock
+tailscale.com/feature/condregister/identityfederation from tailscale.com/tsnet
 tailscale.com/feature/condregister/oauthkey from tailscale.com/tsnet
 tailscale.com/feature/condregister/portmapper from tailscale.com/tsnet
 tailscale.com/feature/condregister/useproxy from tailscale.com/tsnet
+tailscale.com/feature/identityfederation from tailscale.com/feature/condregister/identityfederation
 tailscale.com/feature/oauthkey from tailscale.com/feature/condregister/oauthkey
 tailscale.com/feature/portmapper from tailscale.com/feature/condregister/portmapper
 tailscale.com/feature/syspolicy from tailscale.com/logpolicy
@ -824,7 +830,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 tailscale.com/tsweb from tailscale.com/util/eventbus
 tailscale.com/tsweb/varz from tailscale.com/util/usermetric+
 tailscale.com/types/appctype from tailscale.com/ipn/ipnlocal+
-tailscale.com/types/bools from tailscale.com/tsnet
+tailscale.com/types/bools from tailscale.com/tsnet+
 tailscale.com/types/dnstype from tailscale.com/ipn/ipnlocal+
 tailscale.com/types/empty from tailscale.com/ipn+
 tailscale.com/types/ipproto from tailscale.com/net/flowtrack+
@ -847,7 +853,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 tailscale.com/types/views from tailscale.com/appc+
 tailscale.com/util/backoff from tailscale.com/cmd/k8s-operator+
 tailscale.com/util/checkchange from tailscale.com/ipn/ipnlocal+
-tailscale.com/util/cibuild from tailscale.com/health
+tailscale.com/util/cibuild from tailscale.com/health+
 tailscale.com/util/clientmetric from tailscale.com/cmd/k8s-operator+
 tailscale.com/util/cloudenv from tailscale.com/hostinfo+
 LW tailscale.com/util/cmpver from tailscale.com/net/dns+
@ -995,7 +1001,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
 crypto/ecdsa from crypto/tls+
 crypto/ed25519 from crypto/tls+
 crypto/elliptic from crypto/ecdsa+
-crypto/fips140 from crypto/tls/internal/fips140tls
+crypto/fips140 from crypto/tls/internal/fips140tls+
 crypto/hkdf from crypto/internal/hpke+
 crypto/hmac from crypto/tls+
 crypto/internal/boring from crypto/aes+

@ -26,4 +26,4 @@ maintainers:
 version: 0.1.0
 # appVersion will be set to Tailscale repo tag at release time.
-appVersion: "unstable"
+appVersion: "stable"

@ -3,8 +3,8 @@
 # If old setting used, enable both old (operator) and new (ProxyGroup) workflows.
 # If new setting used, enable only new workflow.
-{{ if or (eq .Values.apiServerProxyConfig.mode "true")
-         (eq .Values.apiServerProxyConfig.allowImpersonation "true") }}
+{{ if or (eq (toString .Values.apiServerProxyConfig.mode) "true")
+         (eq (toString .Values.apiServerProxyConfig.allowImpersonation) "true") }}
 apiVersion: v1
 kind: ServiceAccount
 metadata:
@ -25,7 +25,7 @@ kind: ClusterRoleBinding
 metadata:
   name: tailscale-auth-proxy
 subjects:
-{{- if eq .Values.apiServerProxyConfig.mode "true" }}
+{{- if eq (toString .Values.apiServerProxyConfig.mode) "true" }}
 - kind: ServiceAccount
   name: operator
   namespace: {{ .Release.Namespace }}

@ -34,7 +34,9 @@ spec:
       securityContext:
         {{- toYaml . | nindent 8 }}
       {{- end }}
+      {{- if or .Values.oauth.clientSecret .Values.oauth.audience }}
       volumes:
+      {{- if .Values.oauth.clientSecret }}
       - name: oauth
         {{- with .Values.oauthSecretVolume }}
         {{- toYaml . | nindent 10 }}
@ -42,6 +44,17 @@ spec:
         secret:
           secretName: operator-oauth
         {{- end }}
+      {{- else }}
+      - name: oidc-jwt
+        projected:
+          defaultMode: 420
+          sources:
+          - serviceAccountToken:
+              audience: {{ .Values.oauth.audience }}
+              expirationSeconds: 3600
+              path: token
+      {{- end }}
+      {{- end }}
       containers:
       - name: operator
         {{- with .Values.operatorConfig.securityContext }}
@ -72,10 +85,15 @@ spec:
           value: {{ .Values.loginServer }}
         - name: OPERATOR_INGRESS_CLASS_NAME
           value: {{ .Values.ingressClass.name }}
+        {{- if .Values.oauth.clientSecret }}
         - name: CLIENT_ID_FILE
          value: /oauth/client_id
         - name: CLIENT_SECRET_FILE
           value: /oauth/client_secret
+        {{- else if .Values.oauth.audience }}
+        - name: CLIENT_ID
+          value: {{ .Values.oauth.clientId }}
+        {{- end }}
         {{- $proxyTag := printf ":%s" ( .Values.proxyConfig.image.tag | default .Chart.AppVersion )}}
         - name: PROXY_IMAGE
           value: {{ coalesce .Values.proxyConfig.image.repo .Values.proxyConfig.image.repository }}{{- if .Values.proxyConfig.image.digest -}}{{ printf "@%s" .Values.proxyConfig.image.digest}}{{- else -}}{{ printf "%s" $proxyTag }}{{- end }}
@ -100,10 +118,18 @@ spec:
         {{- with .Values.operatorConfig.extraEnv }}
         {{- toYaml . | nindent 12 }}
         {{- end }}
+        {{- if or .Values.oauth.clientSecret .Values.oauth.audience }}
         volumeMounts:
+        {{- if .Values.oauth.clientSecret }}
         - name: oauth
           mountPath: /oauth
           readOnly: true
+        {{- else }}
+        - name: oidc-jwt
+          mountPath: /var/run/secrets/tailscale/serviceaccount
+          readOnly: true
+        {{- end }}
+        {{- end }}
       {{- with .Values.operatorConfig.nodeSelector }}
       nodeSelector:
         {{- toYaml . | nindent 8 }}

@ -1,7 +1,7 @@
 # Copyright (c) Tailscale Inc & AUTHORS
 # SPDX-License-Identifier: BSD-3-Clause
-{{ if and .Values.oauth .Values.oauth.clientId -}}
+{{ if and .Values.oauth .Values.oauth.clientId .Values.oauth.clientSecret -}}
 apiVersion: v1
 kind: Secret
 metadata:

@ -1,13 +1,20 @@
 # Copyright (c) Tailscale Inc & AUTHORS
 # SPDX-License-Identifier: BSD-3-Clause
-# Operator oauth credentials. If set a Kubernetes Secret with the provided
-# values will be created in the operator namespace. If unset a Secret named
-# operator-oauth must be precreated or oauthSecretVolume needs to be adjusted.
-# This block will be overridden by oauthSecretVolume, if set.
-oauth: {}
-# clientId: ""
-# clientSecret: ""
+# Operator oauth credentials. If unset a Secret named operator-oauth must be
+# precreated or oauthSecretVolume needs to be adjusted. This block will be
+# overridden by oauthSecretVolume, if set.
+oauth:
+  # The Client ID the operator will authenticate with.
+  clientId: ""
+  # If set a Kubernetes Secret with the provided value will be created in
+  # the operator namespace, and mounted into the operator Pod. Takes precedence
+  # over oauth.audience.
+  clientSecret: ""
+  # The audience for oauth.clientId if using a workload identity federation
+  # OAuth client. Mutually exclusive with oauth.clientSecret.
+  # See https://tailscale.com/kb/1581/workload-identity-federation.
+  audience: ""
 # URL of the control plane to be used by all resources managed by the operator.
 loginServer: ""

@ -68,6 +68,11 @@ spec:
       Corresponds to --ui tsrecorder flag https://tailscale.com/kb/1246/tailscale-ssh-session-recording#deploy-a-recorder-node.
       Required if S3 storage is not set up, to ensure that recordings are accessible.
     type: boolean
+  replicas:
+    description: Replicas specifies how many instances of tsrecorder to run. Defaults to 1.
+    type: integer
+    format: int32
+    minimum: 0
   statefulSet:
     description: |-
       Configuration parameters for the Recorder's StatefulSet. The operator
@ -1683,6 +1688,9 @@ spec:
     items:
       type: string
       pattern: ^tag:[a-zA-Z][a-zA-Z0-9-]*$
+x-kubernetes-validations:
+- rule: '!(self.replicas > 1 && (!has(self.storage) || !has(self.storage.s3)))'
+  message: S3 storage must be used when deploying multiple Recorder replicas
 status:
   description: |-
     RecorderStatus describes the status of the recorder. This is set

@ -3348,6 +3348,11 @@ spec:
       Corresponds to --ui tsrecorder flag https://tailscale.com/kb/1246/tailscale-ssh-session-recording#deploy-a-recorder-node.
       Required if S3 storage is not set up, to ensure that recordings are accessible.
     type: boolean
+  replicas:
+    description: Replicas specifies how many instances of tsrecorder to run. Defaults to 1.
+    format: int32
+    minimum: 0
+    type: integer
   statefulSet:
     description: |-
       Configuration parameters for the Recorder's StatefulSet. The operator
@ -4964,6 +4969,9 @@ spec:
       type: string
     type: array
   type: object
+x-kubernetes-validations:
+- message: S3 storage must be used when deploying multiple Recorder replicas
+  rule: '!(self.replicas > 1 && (!has(self.storage) || !has(self.storage.s3)))'
 status:
   description: |-
     RecorderStatus describes the status of the recorder. This is set
@ -5366,7 +5374,7 @@ spec:
         - name: CLIENT_SECRET_FILE
           value: /oauth/client_secret
         - name: PROXY_IMAGE
-          value: tailscale/tailscale:unstable
+          value: tailscale/tailscale:stable
         - name: PROXY_TAGS
           value: tag:k8s
         - name: APISERVER_PROXY
@ -5381,7 +5389,7 @@ spec:
           valueFrom:
             fieldRef:
               fieldPath: metadata.uid
-        image: tailscale/k8s-operator:unstable
+        image: tailscale/k8s-operator:stable
         imagePullPolicy: Always
         name: operator
         volumeMounts:

@ -36,21 +36,21 @@ type egressEpsReconciler struct {
 // It compares tailnet service state stored in egress proxy state Secrets by containerboot with the desired
 // configuration stored in proxy-cfg ConfigMap to determine if the endpoint is ready.
 func (er *egressEpsReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
-	l := er.logger.With("Service", req.NamespacedName)
-	l.Debugf("starting reconcile")
-	defer l.Debugf("reconcile finished")
+	lg := er.logger.With("Service", req.NamespacedName)
+	lg.Debugf("starting reconcile")
+	defer lg.Debugf("reconcile finished")
 	eps := new(discoveryv1.EndpointSlice)
 	err = er.Get(ctx, req.NamespacedName, eps)
 	if apierrors.IsNotFound(err) {
-		l.Debugf("EndpointSlice not found")
+		lg.Debugf("EndpointSlice not found")
 		return reconcile.Result{}, nil
 	}
 	if err != nil {
 		return reconcile.Result{}, fmt.Errorf("failed to get EndpointSlice: %w", err)
 	}
 	if !eps.DeletionTimestamp.IsZero() {
-		l.Debugf("EnpointSlice is being deleted")
+		lg.Debugf("EndpointSlice is being deleted")
 		return res, nil
 	}
@ -64,7 +64,7 @@ func (er *egressEpsReconciler) Reconcile(ctx context.Context, req reconcile.Requ
 	}
 	err = er.Get(ctx, client.ObjectKeyFromObject(svc), svc)
 	if apierrors.IsNotFound(err) {
-		l.Infof("ExternalName Service %s/%s not found, perhaps it was deleted", svc.Namespace, svc.Name)
+		lg.Infof("ExternalName Service %s/%s not found, perhaps it was deleted", svc.Namespace, svc.Name)
 		return res, nil
 	}
 	if err != nil {
@ -77,7 +77,7 @@ func (er *egressEpsReconciler) Reconcile(ctx context.Context, req reconcile.Requ
 	oldEps := eps.DeepCopy()
 	tailnetSvc := tailnetSvcName(svc)
-	l = l.With("tailnet-service-name", tailnetSvc)
+	lg = lg.With("tailnet-service-name", tailnetSvc)
 	// Retrieve the desired tailnet service configuration from the ConfigMap.
 	proxyGroupName := eps.Labels[labelProxyGroup]
@ -88,12 +88,12 @@ func (er *egressEpsReconciler) Reconcile(ctx context.Context, req reconcile.Requ
 	if cfgs == nil {
 		// TODO(irbekrm): this path would be hit if egress service was once exposed on a ProxyGroup that later
 		// got deleted. Probably the EndpointSlices then need to be deleted too- need to rethink this flow.
-		l.Debugf("No egress config found, likely because ProxyGroup has not been created")
+		lg.Debugf("No egress config found, likely because ProxyGroup has not been created")
 		return res, nil
 	}
 	cfg, ok := (*cfgs)[tailnetSvc]
 	if !ok {
-		l.Infof("[unexpected] configuration for tailnet service %s not found", tailnetSvc)
+		lg.Infof("[unexpected] configuration for tailnet service %s not found", tailnetSvc)
 		return res, nil
 	}
@ -105,7 +105,7 @@ func (er *egressEpsReconciler) Reconcile(ctx context.Context, req reconcile.Requ
 	}
 	newEndpoints := make([]discoveryv1.Endpoint, 0)
 	for _, pod := range podList.Items {
-		ready, err := er.podIsReadyToRouteTraffic(ctx, pod, &cfg, tailnetSvc, l)
+		ready, err := er.podIsReadyToRouteTraffic(ctx, pod, &cfg, tailnetSvc, lg)
 		if err != nil {
 			return res, fmt.Errorf("error verifying if Pod is ready to route traffic: %w", err)
 		}
@ -130,7 +130,7 @@ func (er *egressEpsReconciler) Reconcile(ctx context.Context, req reconcile.Requ
 	// run a cleanup for deleted Pods etc.
 	eps.Endpoints = newEndpoints
 	if !reflect.DeepEqual(eps, oldEps) {
-		l.Infof("Updating EndpointSlice to ensure traffic is routed to ready proxy Pods")
+		lg.Infof("Updating EndpointSlice to ensure traffic is routed to ready proxy Pods")
 		if err := er.Update(ctx, eps); err != nil {
 			return res, fmt.Errorf("error updating EndpointSlice: %w", err)
 		}
@ -154,11 +154,11 @@ func podIPv4(pod *corev1.Pod) (string, error) {
 // podIsReadyToRouteTraffic returns true if it appears that the proxy Pod has configured firewall rules to be able to
 // route traffic to the given tailnet service. It retrieves the proxy's state Secret and compares the tailnet service
 // status written there to the desired service configuration.
-func (er *egressEpsReconciler) podIsReadyToRouteTraffic(ctx context.Context, pod corev1.Pod, cfg *egressservices.Config, tailnetSvcName string, l *zap.SugaredLogger) (bool, error) {
-	l = l.With("proxy_pod", pod.Name)
-	l.Debugf("checking whether proxy is ready to route to egress service")
+func (er *egressEpsReconciler) podIsReadyToRouteTraffic(ctx context.Context, pod corev1.Pod, cfg *egressservices.Config, tailnetSvcName string, lg *zap.SugaredLogger) (bool, error) {
+	lg = lg.With("proxy_pod", pod.Name)
+	lg.Debugf("checking whether proxy is ready to route to egress service")
 	if !pod.DeletionTimestamp.IsZero() {
-		l.Debugf("proxy Pod is being deleted, ignore")
+		lg.Debugf("proxy Pod is being deleted, ignore")
 		return false, nil
 	}
 	podIP, err := podIPv4(&pod)
@ -166,7 +166,7 @@ func (er *egressEpsReconciler) podIsReadyToRouteTraffic(ctx context.Context, pod
 		return false, fmt.Errorf("error determining Pod IP address: %v", err)
 	}
 	if podIP == "" {
-		l.Infof("[unexpected] Pod does not have an IPv4 address, and IPv6 is not currently supported")
+		lg.Infof("[unexpected] Pod does not have an IPv4 address, and IPv6 is not currently supported")
 		return false, nil
 	}
 	stateS := &corev1.Secret{
@ -177,7 +177,7 @@ func (er *egressEpsReconciler) podIsReadyToRouteTraffic(ctx context.Context, pod
 	}
 	err = er.Get(ctx, client.ObjectKeyFromObject(stateS), stateS)
 	if apierrors.IsNotFound(err) {
-		l.Debugf("proxy does not have a state Secret, waiting...")
+		lg.Debugf("proxy does not have a state Secret, waiting...")
 		return false, nil
 	}
 	if err != nil {
@ -185,7 +185,7 @@ func (er *egressEpsReconciler) podIsReadyToRouteTraffic(ctx context.Context, pod
 	}
 	svcStatusBS := stateS.Data[egressservices.KeyEgressServices]
 	if len(svcStatusBS) == 0 {
-		l.Debugf("proxy's state Secret does not contain egress services status, waiting...")
+		lg.Debugf("proxy's state Secret does not contain egress services status, waiting...")
 		return false, nil
 	}
 	svcStatus := &egressservices.Status{}
@ -193,22 +193,22 @@ func (er *egressEpsReconciler) podIsReadyToRouteTraffic(ctx context.Context, pod
 		return false, fmt.Errorf("error unmarshalling egress service status: %w", err)
 	}
 	if !strings.EqualFold(podIP, svcStatus.PodIPv4) {
-		l.Infof("proxy's egress service status is for Pod IP %s, current proxy's Pod IP %s, waiting for the proxy to reconfigure...", svcStatus.PodIPv4, podIP)
+		lg.Infof("proxy's egress service status is for Pod IP %s, current proxy's Pod IP %s, waiting for the proxy to reconfigure...", svcStatus.PodIPv4, podIP)
 		return false, nil
 	}
 	st, ok := (*svcStatus).Services[tailnetSvcName]
 	if !ok {
-		l.Infof("proxy's state Secret does not have egress service status, waiting...")
+		lg.Infof("proxy's state Secret does not have egress service status, waiting...")
 		return false, nil
 	}
 	if !reflect.DeepEqual(cfg.TailnetTarget, st.TailnetTarget) {
-		l.Infof("proxy has configured egress service for tailnet target %v, current target is %v, waiting for proxy to reconfigure...", st.TailnetTarget, cfg.TailnetTarget)
+		lg.Infof("proxy has configured egress service for tailnet target %v, current target is %v, waiting for proxy to reconfigure...", st.TailnetTarget, cfg.TailnetTarget)
 		return false, nil
 	}
 	if !reflect.DeepEqual(cfg.Ports, st.Ports) {
-		l.Debugf("proxy has configured egress service for ports %#+v, wants ports %#+v, waiting for proxy to reconfigure", st.Ports, cfg.Ports)
+		lg.Debugf("proxy has configured egress service for ports %#+v, wants ports %#+v, waiting for proxy to reconfigure", st.Ports, cfg.Ports)
 		return false, nil
 	}
-	l.Debugf("proxy is ready to route traffic to egress service")
+	lg.Debugf("proxy is ready to route traffic to egress service")
 	return true, nil
 }

@ -71,9 +71,9 @@ type egressPodsReconciler struct {
 // If the Pod does not appear to be serving the health check endpoint (pre-v1.80 proxies), the reconciler just sets the
 // readiness condition for backwards compatibility reasons.
 func (er *egressPodsReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
-	l := er.logger.With("Pod", req.NamespacedName)
-	l.Debugf("starting reconcile")
-	defer l.Debugf("reconcile finished")
+	lg := er.logger.With("Pod", req.NamespacedName)
+	lg.Debugf("starting reconcile")
+	defer lg.Debugf("reconcile finished")
 	pod := new(corev1.Pod)
 	err = er.Get(ctx, req.NamespacedName, pod)
@ -84,11 +84,11 @@
 		return reconcile.Result{}, fmt.Errorf("failed to get Pod: %w", err)
 	}
 	if !pod.DeletionTimestamp.IsZero() {
-		l.Debugf("Pod is being deleted, do nothing")
+		lg.Debugf("Pod is being deleted, do nothing")
 		return res, nil
 	}
 	if pod.Labels[LabelParentType] != proxyTypeProxyGroup {
-		l.Infof("[unexpected] reconciler called for a Pod that is not a ProxyGroup Pod")
+		lg.Infof("[unexpected] reconciler called for a Pod that is not a ProxyGroup Pod")
 		return res, nil
 	}
@ -97,7 +97,7 @@
 	if !slices.ContainsFunc(pod.Spec.ReadinessGates, func(r corev1.PodReadinessGate) bool {
 		return r.ConditionType == tsEgressReadinessGate
 	}) {
-		l.Debug("Pod does not have egress readiness gate set, skipping")
+		lg.Debug("Pod does not have egress readiness gate set, skipping")
 		return res, nil
 	}
@ -107,7 +107,7 @@
 		return res, fmt.Errorf("error getting ProxyGroup %q: %w", proxyGroupName, err)
 	}
 	if pg.Spec.Type != typeEgress {
-		l.Infof("[unexpected] reconciler called for %q ProxyGroup Pod", pg.Spec.Type)
+		lg.Infof("[unexpected] reconciler called for %q ProxyGroup Pod", pg.Spec.Type)
 		return res, nil
 	}
 	// Get all ClusterIP Services for all egress targets exposed to cluster via this ProxyGroup.
@ -125,7 +125,7 @@
 		return c.Type == tsEgressReadinessGate
 	})
 	if idx != -1 {
-		l.Debugf("Pod is already ready, do nothing")
+		lg.Debugf("Pod is already ready, do nothing")
 		return res, nil
 	}
@ -134,7 +134,7 @@
 	for _, svc := range svcs.Items {
 		s := svc
 		go func() {
-			ll := l.With("service_name", s.Name)
+			ll := lg.With("service_name", s.Name)
 			d := retrieveClusterDomain(er.tsNamespace, ll)
 			healthCheckAddr := healthCheckForSvc(&s, d)
 			if healthCheckAddr == "" {
@ -175,25 +175,25 @@
 		err = errors.Join(err, e)
 	}
 	if err != nil {
-		return res, fmt.Errorf("error verifying conectivity: %w", err)
+		return res, fmt.Errorf("error verifying connectivity: %w", err)
 	}
 	if rm := routesMissing.Load(); rm {
-		l.Info("Pod is not yet added as an endpoint for all egress targets, waiting...")
+		lg.Info("Pod is not yet added as an endpoint for all egress targets, waiting...")
 		return reconcile.Result{RequeueAfter: shortRequeue}, nil
 	}
-	if err := er.setPodReady(ctx, pod, l); err != nil {
+	if err := er.setPodReady(ctx, pod, lg); err != nil {
 		return res, fmt.Errorf("error setting Pod as ready: %w", err)
 	}
 	return res, nil
 }

-func (er *egressPodsReconciler) setPodReady(ctx context.Context, pod *corev1.Pod, l *zap.SugaredLogger) error {
+func (er *egressPodsReconciler) setPodReady(ctx context.Context, pod *corev1.Pod, lg *zap.SugaredLogger) error {
 	if slices.ContainsFunc(pod.Status.Conditions, func(c corev1.PodCondition) bool {
 		return c.Type == tsEgressReadinessGate
 	}) {
 		return nil
 	}
-	l.Infof("Pod is ready to route traffic to all egress targets")
+	lg.Infof("Pod is ready to route traffic to all egress targets")
 	pod.Status.Conditions = append(pod.Status.Conditions, corev1.PodCondition{
 		Type:   tsEgressReadinessGate,
 		Status: corev1.ConditionTrue,
@ -216,11 +216,11 @@ const (
 )

 // lookupPodRouteViaSvc attempts to reach a Pod using a health check endpoint served by a Service and returns the state of the health check.
-func (er *egressPodsReconciler) lookupPodRouteViaSvc(ctx context.Context, pod *corev1.Pod, healthCheckAddr string, l *zap.SugaredLogger) (healthCheckState, error) {
+func (er *egressPodsReconciler) lookupPodRouteViaSvc(ctx context.Context, pod *corev1.Pod, healthCheckAddr string, lg *zap.SugaredLogger) (healthCheckState, error) {
 	if !slices.ContainsFunc(pod.Spec.Containers[0].Env, func(e corev1.EnvVar) bool {
 		return e.Name == "TS_ENABLE_HEALTH_CHECK" && e.Value == "true"
 	}) {
-		l.Debugf("Pod does not have health check enabled, unable to verify if it is currently routable via Service")
+		lg.Debugf("Pod does not have health check enabled, unable to verify if it is currently routable via Service")
 		return cannotVerify, nil
 	}
 	wantsIP, err := podIPv4(pod)
@ -241,14 +241,14 @@
 	req.Close = true
 	resp, err := er.httpClient.Do(req)
 	if err != nil {
-		// This is most likely because this is the first Pod and is not yet added to Service endoints. Other
+		// This is most likely because this is the first Pod and is not yet added to service endpoints. Other
 		// error types are possible, but checking for those would likely make the system too fragile.
 		return unreachable, nil
 	}
 	defer resp.Body.Close()
 	gotIP := resp.Header.Get(kubetypes.PodIPv4Header)
 	if gotIP == "" {
-		l.Debugf("Health check does not return Pod's IP header, unable to verify if Pod is currently routable via Service")
+		lg.Debugf("Health check does not return Pod's IP header, unable to verify if Pod is currently routable via Service")
 		return cannotVerify, nil
 	}
 	if !strings.EqualFold(wantsIP, gotIP) {

@ -47,13 +47,13 @@ type egressSvcsReadinessReconciler struct {
 // route traffic to the target. It compares proxy Pod IPs with the endpoints set on the EndpointSlice for the egress
 // service to determine how many replicas are currently able to route traffic.
 func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
-	l := esrr.logger.With("Service", req.NamespacedName)
-	l.Debugf("starting reconcile")
-	defer l.Debugf("reconcile finished")
+	lg := esrr.logger.With("Service", req.NamespacedName)
+	lg.Debugf("starting reconcile")
+	defer lg.Debugf("reconcile finished")
 	svc := new(corev1.Service)
 	if err = esrr.Get(ctx, req.NamespacedName, svc); apierrors.IsNotFound(err) {
-		l.Debugf("Service not found")
+		lg.Debugf("Service not found")
 		return res, nil
 	} else if err != nil {
 		return res, fmt.Errorf("failed to get Service: %w", err)
@ -64,7 +64,7 @@ func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req re
 	)
 	oldStatus := svc.Status.DeepCopy()
 	defer func() {
-		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, st, reason, msg, esrr.clock, l)
+		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, st, reason, msg, esrr.clock, lg)
 		if !apiequality.Semantic.DeepEqual(oldStatus, &svc.Status) {
 			err = errors.Join(err, esrr.Status().Update(ctx, svc))
 		}
@ -79,7 +79,7 @@ func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req re
 		return res, err
 	}
 	if eps == nil {
-		l.Infof("EndpointSlice for Service does not yet exist, waiting...")
+		lg.Infof("EndpointSlice for Service does not yet exist, waiting...")
 		reason, msg = reasonClusterResourcesNotReady, reasonClusterResourcesNotReady
 		st = metav1.ConditionFalse
 		return res, nil
@ -91,7 +91,7 @@ func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req re
 	}
 	err = esrr.Get(ctx, client.ObjectKeyFromObject(pg), pg)
 	if apierrors.IsNotFound(err) {
-		l.Infof("ProxyGroup for Service does not exist, waiting...")
+		lg.Infof("ProxyGroup for Service does not exist, waiting...")
 		reason, msg = reasonClusterResourcesNotReady, reasonClusterResourcesNotReady
 		st = metav1.ConditionFalse
 		return res, nil
@ -103,7 +103,7 @@ func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req re
 		return res, err
 	}
 	if !tsoperator.ProxyGroupAvailable(pg) {
-		l.Infof("ProxyGroup for Service is not ready, waiting...")
+		lg.Infof("ProxyGroup for Service is not ready, waiting...")
 		reason, msg = reasonClusterResourcesNotReady, reasonClusterResourcesNotReady
 		st = metav1.ConditionFalse
 		return res, nil
@ -111,7 +111,7 @@ func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req re
 	replicas := pgReplicas(pg)
 	if replicas == 0 {
-		l.Infof("ProxyGroup replicas set to 0")
+		lg.Infof("ProxyGroup replicas set to 0")
 		reason, msg = reasonNoProxies, reasonNoProxies
 		st = metav1.ConditionFalse
 		return res, nil
@ -128,16 +128,16 @@ func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req re
 		return res, err
 	}
 	if pod == nil {
-		l.Warnf("[unexpected] ProxyGroup is ready, but replica %d was not found", i)
+		lg.Warnf("[unexpected] ProxyGroup is ready, but replica %d was not found", i)
 		reason, msg = reasonClusterResourcesNotReady, reasonClusterResourcesNotReady
 		return res, nil
 	}
-	l.Debugf("looking at Pod with IPs %v", pod.Status.PodIPs)
+	lg.Debugf("looking at Pod with IPs %v", pod.Status.PodIPs)
 	ready := false
 	for _, ep := range eps.Endpoints {
-		l.Debugf("looking at endpoint with addresses %v", ep.Addresses)
-		if endpointReadyForPod(&ep, pod, l) {
-			l.Debugf("endpoint is ready for Pod")
+		lg.Debugf("looking at endpoint with addresses %v", ep.Addresses)
+		if endpointReadyForPod(&ep, pod, lg) {
+			lg.Debugf("endpoint is ready for Pod")
 			ready = true
 			break
 		}
@ -163,10 +163,10 @@ func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req re
 // endpointReadyForPod returns true if the endpoint is for the Pod's IPv4 address and is ready to serve traffic.
 // Endpoint must not be nil.
-func endpointReadyForPod(ep *discoveryv1.Endpoint, pod *corev1.Pod, l *zap.SugaredLogger) bool {
+func endpointReadyForPod(ep *discoveryv1.Endpoint, pod *corev1.Pod, lg *zap.SugaredLogger) bool {
 	podIP, err := podIPv4(pod)
 	if err != nil {
-		l.Warnf("[unexpected] error retrieving Pod's IPv4 address: %v", err)
+		lg.Warnf("[unexpected] error retrieving Pod's IPv4 address: %v", err)
 		return false
 	}
 	// Currently we only ever set a single address on an Endpoint and nothing else is meant to modify this.

@ -49,12 +49,12 @@ func TestEgressServiceReadiness(t *testing.T) {
 		},
 	}
 	fakeClusterIPSvc := &corev1.Service{ObjectMeta: metav1.ObjectMeta{Name: "my-app", Namespace: "operator-ns"}}
-	l := egressSvcEpsLabels(egressSvc, fakeClusterIPSvc)
+	labels := egressSvcEpsLabels(egressSvc, fakeClusterIPSvc)
 	eps := &discoveryv1.EndpointSlice{
 		ObjectMeta: metav1.ObjectMeta{
 			Name:      "my-app",
 			Namespace: "operator-ns",
-			Labels:    l,
+			Labels:    labels,
 		},
 		AddressType: discoveryv1.AddressTypeIPv4,
 	}
@ -118,26 +118,26 @@ func TestEgressServiceReadiness(t *testing.T) {
 	})
 }

-func setClusterNotReady(svc *corev1.Service, cl tstime.Clock, l *zap.SugaredLogger) {
-	tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionFalse, reasonClusterResourcesNotReady, reasonClusterResourcesNotReady, cl, l)
+func setClusterNotReady(svc *corev1.Service, cl tstime.Clock, lg *zap.SugaredLogger) {
+	tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionFalse, reasonClusterResourcesNotReady, reasonClusterResourcesNotReady, cl, lg)
 }

-func setNotReady(svc *corev1.Service, cl tstime.Clock, l *zap.SugaredLogger, replicas int32) {
+func setNotReady(svc *corev1.Service, cl tstime.Clock, lg *zap.SugaredLogger, replicas int32) {
 	msg := fmt.Sprintf(msgReadyToRouteTemplate, 0, replicas)
-	tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionFalse, reasonNotReady, msg, cl, l)
+	tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionFalse, reasonNotReady, msg, cl, lg)
 }

-func setReady(svc *corev1.Service, cl tstime.Clock, l *zap.SugaredLogger, replicas, readyReplicas int32) {
+func setReady(svc *corev1.Service, cl tstime.Clock, lg *zap.SugaredLogger, replicas, readyReplicas int32) {
 	reason := reasonPartiallyReady
 	if readyReplicas == replicas {
 		reason = reasonReady
 	}
 	msg := fmt.Sprintf(msgReadyToRouteTemplate, readyReplicas, replicas)
-	tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionTrue, reason, msg, cl, l)
+	tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionTrue, reason, msg, cl, lg)
 }

-func setPGReady(pg *tsapi.ProxyGroup, cl tstime.Clock, l *zap.SugaredLogger) {
-	tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupAvailable, metav1.ConditionTrue, "foo", "foo", pg.Generation, cl, l)
+func setPGReady(pg *tsapi.ProxyGroup, cl tstime.Clock, lg *zap.SugaredLogger) {
+	tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupAvailable, metav1.ConditionTrue, "foo", "foo", pg.Generation, cl, lg)
 }

 func setEndpointForReplica(pg *tsapi.ProxyGroup, ordinal int32, eps *discoveryv1.EndpointSlice) {
@ -153,14 +153,14 @@ func setEndpointForReplica(pg *tsapi.ProxyGroup, ordinal int32, eps *discoveryv1
 }

 func pod(pg *tsapi.ProxyGroup, ordinal int32) *corev1.Pod {
-	l := pgLabels(pg.Name, nil)
-	l[appsv1.PodIndexLabel] = fmt.Sprintf("%d", ordinal)
+	labels := pgLabels(pg.Name, nil)
+	labels[appsv1.PodIndexLabel] = fmt.Sprintf("%d", ordinal)
 	ip := fmt.Sprintf("10.0.0.%d", ordinal)
 	return &corev1.Pod{
 		ObjectMeta: metav1.ObjectMeta{
 			Name:      fmt.Sprintf("%s-%d", pg.Name, ordinal),
 			Namespace: "operator-ns",
-			Labels:    l,
+			Labels:    labels,
 		},
 		Status: corev1.PodStatus{
 			PodIPs: []corev1.PodIP{{IP: ip}},

@@ -98,12 +98,12 @@ type egressSvcsReconciler struct {
 // - updates the egress service config in a ConfigMap mounted to the ProxyGroup proxies with the tailnet target and the
 // portmappings.
 func (esr *egressSvcsReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
-	l := esr.logger.With("Service", req.NamespacedName)
-	defer l.Info("reconcile finished")
+	lg := esr.logger.With("Service", req.NamespacedName)
+	defer lg.Info("reconcile finished")
 
 	svc := new(corev1.Service)
 	if err = esr.Get(ctx, req.NamespacedName, svc); apierrors.IsNotFound(err) {
-		l.Info("Service not found")
+		lg.Info("Service not found")
 		return res, nil
 	} else if err != nil {
 		return res, fmt.Errorf("failed to get Service: %w", err)
@@ -111,7 +111,7 @@ func (esr *egressSvcsReconciler) Reconcile(ctx context.Context, req reconcile.Re
 
 	// Name of the 'egress service', meaning the tailnet target.
 	tailnetSvc := tailnetSvcName(svc)
-	l = l.With("tailnet-service", tailnetSvc)
+	lg = lg.With("tailnet-service", tailnetSvc)
 
 	// Note that resources for egress Services are only cleaned up when the
 	// Service is actually deleted (and not if, for example, user decides to
@@ -119,8 +119,8 @@ func (esr *egressSvcsReconciler) Reconcile(ctx context.Context, req reconcile.Re
 	// assume that the egress ExternalName Services are always created for
 	// Tailscale operator specifically.
 	if !svc.DeletionTimestamp.IsZero() {
-		l.Info("Service is being deleted, ensuring resource cleanup")
-		return res, esr.maybeCleanup(ctx, svc, l)
+		lg.Info("Service is being deleted, ensuring resource cleanup")
+		return res, esr.maybeCleanup(ctx, svc, lg)
 	}
 
 	oldStatus := svc.Status.DeepCopy()
@@ -131,7 +131,7 @@ func (esr *egressSvcsReconciler) Reconcile(ctx context.Context, req reconcile.Re
 	}()
 
 	// Validate the user-created ExternalName Service and the associated ProxyGroup.
-	if ok, err := esr.validateClusterResources(ctx, svc, l); err != nil {
+	if ok, err := esr.validateClusterResources(ctx, svc, lg); err != nil {
 		return res, fmt.Errorf("error validating cluster resources: %w", err)
 	} else if !ok {
 		return res, nil
@@ -141,8 +141,8 @@ func (esr *egressSvcsReconciler) Reconcile(ctx context.Context, req reconcile.Re
 		svc.Finalizers = append(svc.Finalizers, FinalizerName)
 		if err := esr.updateSvcSpec(ctx, svc); err != nil {
 			err := fmt.Errorf("failed to add finalizer: %w", err)
-			r := svcConfiguredReason(svc, false, l)
-			tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, metav1.ConditionFalse, r, err.Error(), esr.clock, l)
+			r := svcConfiguredReason(svc, false, lg)
+			tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, metav1.ConditionFalse, r, err.Error(), esr.clock, lg)
 			return res, err
 		}
 		esr.mu.Lock()
@@ -151,16 +151,16 @@ func (esr *egressSvcsReconciler) Reconcile(ctx context.Context, req reconcile.Re
 		esr.mu.Unlock()
 	}
 
-	if err := esr.maybeCleanupProxyGroupConfig(ctx, svc, l); err != nil {
+	if err := esr.maybeCleanupProxyGroupConfig(ctx, svc, lg); err != nil {
 		err = fmt.Errorf("cleaning up resources for previous ProxyGroup failed: %w", err)
-		r := svcConfiguredReason(svc, false, l)
-		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, metav1.ConditionFalse, r, err.Error(), esr.clock, l)
+		r := svcConfiguredReason(svc, false, lg)
+		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, metav1.ConditionFalse, r, err.Error(), esr.clock, lg)
 		return res, err
 	}
 
-	if err := esr.maybeProvision(ctx, svc, l); err != nil {
+	if err := esr.maybeProvision(ctx, svc, lg); err != nil {
 		if strings.Contains(err.Error(), optimisticLockErrorMsg) {
-			l.Infof("optimistic lock error, retrying: %s", err)
+			lg.Infof("optimistic lock error, retrying: %s", err)
 		} else {
 			return reconcile.Result{}, err
 		}
@@ -169,15 +169,15 @@ func (esr *egressSvcsReconciler) Reconcile(ctx context.Context, req reconcile.Re
 	return res, nil
 }
 
-func (esr *egressSvcsReconciler) maybeProvision(ctx context.Context, svc *corev1.Service, l *zap.SugaredLogger) (err error) {
-	r := svcConfiguredReason(svc, false, l)
+func (esr *egressSvcsReconciler) maybeProvision(ctx context.Context, svc *corev1.Service, lg *zap.SugaredLogger) (err error) {
+	r := svcConfiguredReason(svc, false, lg)
 	st := metav1.ConditionFalse
 	defer func() {
 		msg := r
 		if st != metav1.ConditionTrue && err != nil {
 			msg = err.Error()
 		}
-		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, st, r, msg, esr.clock, l)
+		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, st, r, msg, esr.clock, lg)
 	}()
 
 	crl := egressSvcChildResourceLabels(svc)
@@ -189,36 +189,36 @@ func (esr *egressSvcsReconciler) maybeProvision(ctx context.Context, svc *corev1
 	if clusterIPSvc == nil {
 		clusterIPSvc = esr.clusterIPSvcForEgress(crl)
 	}
-	upToDate := svcConfigurationUpToDate(svc, l)
+	upToDate := svcConfigurationUpToDate(svc, lg)
 	provisioned := true
 	if !upToDate {
-		if clusterIPSvc, provisioned, err = esr.provision(ctx, svc.Annotations[AnnotationProxyGroup], svc, clusterIPSvc, l); err != nil {
+		if clusterIPSvc, provisioned, err = esr.provision(ctx, svc.Annotations[AnnotationProxyGroup], svc, clusterIPSvc, lg); err != nil {
 			return err
 		}
 	}
 	if !provisioned {
-		l.Infof("unable to provision cluster resources")
+		lg.Infof("unable to provision cluster resources")
 		return nil
 	}
 
 	// Update ExternalName Service to point at the ClusterIP Service.
-	clusterDomain := retrieveClusterDomain(esr.tsNamespace, l)
+	clusterDomain := retrieveClusterDomain(esr.tsNamespace, lg)
 	clusterIPSvcFQDN := fmt.Sprintf("%s.%s.svc.%s", clusterIPSvc.Name, clusterIPSvc.Namespace, clusterDomain)
 	if svc.Spec.ExternalName != clusterIPSvcFQDN {
-		l.Infof("Configuring ExternalName Service to point to ClusterIP Service %s", clusterIPSvcFQDN)
+		lg.Infof("Configuring ExternalName Service to point to ClusterIP Service %s", clusterIPSvcFQDN)
 		svc.Spec.ExternalName = clusterIPSvcFQDN
 		if err = esr.updateSvcSpec(ctx, svc); err != nil {
 			err = fmt.Errorf("error updating ExternalName Service: %w", err)
 			return err
 		}
 	}
-	r = svcConfiguredReason(svc, true, l)
+	r = svcConfiguredReason(svc, true, lg)
 	st = metav1.ConditionTrue
 	return nil
 }
 
-func (esr *egressSvcsReconciler) provision(ctx context.Context, proxyGroupName string, svc, clusterIPSvc *corev1.Service, l *zap.SugaredLogger) (*corev1.Service, bool, error) {
-	l.Infof("updating configuration...")
+func (esr *egressSvcsReconciler) provision(ctx context.Context, proxyGroupName string, svc, clusterIPSvc *corev1.Service, lg *zap.SugaredLogger) (*corev1.Service, bool, error) {
+	lg.Infof("updating configuration...")
 	usedPorts, err := esr.usedPortsForPG(ctx, proxyGroupName)
 	if err != nil {
 		return nil, false, fmt.Errorf("error calculating used ports for ProxyGroup %s: %w", proxyGroupName, err)
@@ -246,7 +246,7 @@ func (esr *egressSvcsReconciler) provision(ctx context.Context, proxyGroupName s
 			}
 		}
 		if !found {
-			l.Debugf("portmapping %s:%d -> %s:%d is no longer required, removing", pm.Protocol, pm.TargetPort.IntVal, pm.Protocol, pm.Port)
+			lg.Debugf("portmapping %s:%d -> %s:%d is no longer required, removing", pm.Protocol, pm.TargetPort.IntVal, pm.Protocol, pm.Port)
 			clusterIPSvc.Spec.Ports = slices.Delete(clusterIPSvc.Spec.Ports, i, i+1)
 		}
 	}
@@ -277,7 +277,7 @@ func (esr *egressSvcsReconciler) provision(ctx context.Context, proxyGroupName s
 			return nil, false, fmt.Errorf("unable to allocate additional ports on ProxyGroup %s, %d ports already used. Create another ProxyGroup or open an issue if you believe this is unexpected.", proxyGroupName, maxPorts)
 		}
 		p := unusedPort(usedPorts)
-		l.Debugf("mapping tailnet target port %d to container port %d", wantsPM.Port, p)
+		lg.Debugf("mapping tailnet target port %d to container port %d", wantsPM.Port, p)
 		usedPorts.Insert(p)
 		clusterIPSvc.Spec.Ports = append(clusterIPSvc.Spec.Ports, corev1.ServicePort{
 			Name: wantsPM.Name,
@@ -343,14 +343,14 @@ func (esr *egressSvcsReconciler) provision(ctx context.Context, proxyGroupName s
 		return nil, false, fmt.Errorf("error retrieving egress services configuration: %w", err)
 	}
 	if cm == nil {
-		l.Info("ConfigMap not yet created, waiting..")
+		lg.Info("ConfigMap not yet created, waiting..")
 		return nil, false, nil
 	}
 	tailnetSvc := tailnetSvcName(svc)
 	gotCfg := (*cfgs)[tailnetSvc]
-	wantsCfg := egressSvcCfg(svc, clusterIPSvc, esr.tsNamespace, l)
+	wantsCfg := egressSvcCfg(svc, clusterIPSvc, esr.tsNamespace, lg)
 	if !reflect.DeepEqual(gotCfg, wantsCfg) {
-		l.Debugf("updating egress services ConfigMap %s", cm.Name)
+		lg.Debugf("updating egress services ConfigMap %s", cm.Name)
 		mak.Set(cfgs, tailnetSvc, wantsCfg)
 		bs, err := json.Marshal(cfgs)
 		if err != nil {
@@ -361,7 +361,7 @@ func (esr *egressSvcsReconciler) provision(ctx context.Context, proxyGroupName s
 			return nil, false, fmt.Errorf("error updating egress services ConfigMap: %w", err)
 		}
 	}
-	l.Infof("egress service configuration has been updated")
+	lg.Infof("egress service configuration has been updated")
 	return clusterIPSvc, true, nil
 }
 
@@ -402,7 +402,7 @@ func (esr *egressSvcsReconciler) maybeCleanup(ctx context.Context, svc *corev1.S
 	return nil
 }
 
-func (esr *egressSvcsReconciler) maybeCleanupProxyGroupConfig(ctx context.Context, svc *corev1.Service, l *zap.SugaredLogger) error {
+func (esr *egressSvcsReconciler) maybeCleanupProxyGroupConfig(ctx context.Context, svc *corev1.Service, lg *zap.SugaredLogger) error {
 	wantsProxyGroup := svc.Annotations[AnnotationProxyGroup]
 	cond := tsoperator.GetServiceCondition(svc, tsapi.EgressSvcConfigured)
 	if cond == nil {
@@ -416,7 +416,7 @@ func (esr *egressSvcsReconciler) maybeCleanupProxyGroupConfig(ctx context.Contex
 		return nil
 	}
 	esr.logger.Infof("egress Service configured on ProxyGroup %s, wants ProxyGroup %s, cleaning up...", ss[2], wantsProxyGroup)
-	if err := esr.ensureEgressSvcCfgDeleted(ctx, svc, l); err != nil {
+	if err := esr.ensureEgressSvcCfgDeleted(ctx, svc, lg); err != nil {
 		return fmt.Errorf("error deleting egress service config: %w", err)
 	}
 	return nil
@@ -471,17 +471,17 @@ func (esr *egressSvcsReconciler) ensureEgressSvcCfgDeleted(ctx context.Context,
 			Namespace: esr.tsNamespace,
 		},
 	}
-	l := logger.With("ConfigMap", client.ObjectKeyFromObject(cm))
-	l.Debug("ensuring that egress service configuration is removed from proxy config")
+	lggr := logger.With("ConfigMap", client.ObjectKeyFromObject(cm))
+	lggr.Debug("ensuring that egress service configuration is removed from proxy config")
 	if err := esr.Get(ctx, client.ObjectKeyFromObject(cm), cm); apierrors.IsNotFound(err) {
-		l.Debugf("ConfigMap not found")
+		lggr.Debugf("ConfigMap not found")
 		return nil
 	} else if err != nil {
 		return fmt.Errorf("error retrieving ConfigMap: %w", err)
 	}
 	bs := cm.BinaryData[egressservices.KeyEgressServices]
 	if len(bs) == 0 {
-		l.Debugf("ConfigMap does not contain egress service configs")
+		lggr.Debugf("ConfigMap does not contain egress service configs")
 		return nil
 	}
 	cfgs := &egressservices.Configs{}
@@ -491,12 +491,12 @@ func (esr *egressSvcsReconciler) ensureEgressSvcCfgDeleted(ctx context.Context,
 	tailnetSvc := tailnetSvcName(svc)
 	_, ok := (*cfgs)[tailnetSvc]
 	if !ok {
-		l.Debugf("ConfigMap does not contain egress service config, likely because it was already deleted")
+		lggr.Debugf("ConfigMap does not contain egress service config, likely because it was already deleted")
 		return nil
 	}
-	l.Infof("before deleting config %+#v", *cfgs)
+	lggr.Infof("before deleting config %+#v", *cfgs)
 	delete(*cfgs, tailnetSvc)
-	l.Infof("after deleting config %+#v", *cfgs)
+	lggr.Infof("after deleting config %+#v", *cfgs)
 	bs, err := json.Marshal(cfgs)
 	if err != nil {
 		return fmt.Errorf("error marshalling egress services configs: %w", err)
@@ -505,7 +505,7 @@ func (esr *egressSvcsReconciler) ensureEgressSvcCfgDeleted(ctx context.Context,
 	return esr.Update(ctx, cm)
 }
 
-func (esr *egressSvcsReconciler) validateClusterResources(ctx context.Context, svc *corev1.Service, l *zap.SugaredLogger) (bool, error) {
+func (esr *egressSvcsReconciler) validateClusterResources(ctx context.Context, svc *corev1.Service, lg *zap.SugaredLogger) (bool, error) {
 	proxyGroupName := svc.Annotations[AnnotationProxyGroup]
 	pg := &tsapi.ProxyGroup{
 		ObjectMeta: metav1.ObjectMeta{
@@ -513,36 +513,36 @@ func (esr *egressSvcsReconciler) validateClusterResources(ctx context.Context, s
 		},
 	}
 	if err := esr.Get(ctx, client.ObjectKeyFromObject(pg), pg); apierrors.IsNotFound(err) {
-		l.Infof("ProxyGroup %q not found, waiting...", proxyGroupName)
-		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, reasonProxyGroupNotReady, esr.clock, l)
+		lg.Infof("ProxyGroup %q not found, waiting...", proxyGroupName)
+		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, reasonProxyGroupNotReady, esr.clock, lg)
 		tsoperator.RemoveServiceCondition(svc, tsapi.EgressSvcConfigured)
 		return false, nil
 	} else if err != nil {
 		err := fmt.Errorf("unable to retrieve ProxyGroup %s: %w", proxyGroupName, err)
-		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, err.Error(), esr.clock, l)
+		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, err.Error(), esr.clock, lg)
 		tsoperator.RemoveServiceCondition(svc, tsapi.EgressSvcConfigured)
 		return false, err
 	}
 	if violations := validateEgressService(svc, pg); len(violations) > 0 {
 		msg := fmt.Sprintf("invalid egress Service: %s", strings.Join(violations, ", "))
 		esr.recorder.Event(svc, corev1.EventTypeWarning, "INVALIDSERVICE", msg)
-		l.Info(msg)
-		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionFalse, reasonEgressSvcInvalid, msg, esr.clock, l)
+		lg.Info(msg)
+		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionFalse, reasonEgressSvcInvalid, msg, esr.clock, lg)
 		tsoperator.RemoveServiceCondition(svc, tsapi.EgressSvcConfigured)
 		return false, nil
 	}
 	if !tsoperator.ProxyGroupAvailable(pg) {
-		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, reasonProxyGroupNotReady, esr.clock, l)
+		tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, reasonProxyGroupNotReady, esr.clock, lg)
 		tsoperator.RemoveServiceCondition(svc, tsapi.EgressSvcConfigured)
 	}
 
-	l.Debugf("egress service is valid")
-	tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionTrue, reasonEgressSvcValid, reasonEgressSvcValid, esr.clock, l)
+	lg.Debugf("egress service is valid")
+	tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionTrue, reasonEgressSvcValid, reasonEgressSvcValid, esr.clock, lg)
 	return true, nil
 }
 
-func egressSvcCfg(externalNameSvc, clusterIPSvc *corev1.Service, ns string, l *zap.SugaredLogger) egressservices.Config {
-	d := retrieveClusterDomain(ns, l)
+func egressSvcCfg(externalNameSvc, clusterIPSvc *corev1.Service, ns string, lg *zap.SugaredLogger) egressservices.Config {
+	d := retrieveClusterDomain(ns, lg)
 	tt := tailnetTargetFromSvc(externalNameSvc)
 	hep := healthCheckForSvc(clusterIPSvc, d)
 	cfg := egressservices.Config{
@@ -691,18 +691,18 @@ func egressSvcChildResourceLabels(svc *corev1.Service) map[string]string {
 
 // egressEpsLabels returns labels to be added to an EndpointSlice created for an egress service.
 func egressSvcEpsLabels(extNSvc, clusterIPSvc *corev1.Service) map[string]string {
-	l := egressSvcChildResourceLabels(extNSvc)
+	lbels := egressSvcChildResourceLabels(extNSvc)
 	// Adding this label is what makes kube proxy set up rules to route traffic sent to the clusterIP Service to the
 	// endpoints defined on this EndpointSlice.
 	// https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#ownership
-	l[discoveryv1.LabelServiceName] = clusterIPSvc.Name
+	lbels[discoveryv1.LabelServiceName] = clusterIPSvc.Name
 	// Kubernetes recommends setting this label.
 	// https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#management
-	l[discoveryv1.LabelManagedBy] = "tailscale.com"
-	return l
+	lbels[discoveryv1.LabelManagedBy] = "tailscale.com"
+	return lbels
 }
 
-func svcConfigurationUpToDate(svc *corev1.Service, l *zap.SugaredLogger) bool {
+func svcConfigurationUpToDate(svc *corev1.Service, lg *zap.SugaredLogger) bool {
 	cond := tsoperator.GetServiceCondition(svc, tsapi.EgressSvcConfigured)
 	if cond == nil {
 		return false
@@ -710,21 +710,21 @@ func svcConfigurationUpToDate(svc *corev1.Service, l *zap.SugaredLogger) bool {
 	if cond.Status != metav1.ConditionTrue {
 		return false
 	}
-	wantsReadyReason := svcConfiguredReason(svc, true, l)
+	wantsReadyReason := svcConfiguredReason(svc, true, lg)
 	return strings.EqualFold(wantsReadyReason, cond.Reason)
 }
 
-func cfgHash(c cfg, l *zap.SugaredLogger) string {
+func cfgHash(c cfg, lg *zap.SugaredLogger) string {
 	bs, err := json.Marshal(c)
 	if err != nil {
 		// Don't use l.Error as that messes up component logs with, in this case, unnecessary stack trace.
-		l.Infof("error marhsalling Config: %v", err)
+		lg.Infof("error marhsalling Config: %v", err)
 		return ""
 	}
 	h := sha256.New()
 	if _, err := h.Write(bs); err != nil {
 		// Don't use l.Error as that messes up component logs with, in this case, unnecessary stack trace.
-		l.Infof("error producing Config hash: %v", err)
+		lg.Infof("error producing Config hash: %v", err)
 		return ""
 	}
 	return fmt.Sprintf("%x", h.Sum(nil))
@@ -736,7 +736,7 @@ type cfg struct {
 	ProxyGroup    string `json:"proxyGroup"`
 }
 
-func svcConfiguredReason(svc *corev1.Service, configured bool, l *zap.SugaredLogger) string {
	var r string
 	if configured {
 		r = "ConfiguredFor:"
@@ -750,7 +750,7 @@ func svcConfiguredReason(svc *corev1.Service, configured bool, l *zap.SugaredLog
 		TailnetTarget: tt,
 		ProxyGroup:    svc.Annotations[AnnotationProxyGroup],
 	}
-	r += fmt.Sprintf(":Config:%s", cfgHash(s, l))
+	r += fmt.Sprintf(":Config:%s", cfgHash(s, lg))
 	return r
 }
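
The reason-string plumbing above (svcConfiguredReason plus cfgHash) encodes a SHA-256 hash of the JSON-encoded egress config into the condition reason, so any change to the tailnet target or ProxyGroup annotation produces a new reason and forces re-provisioning. A standalone sketch of that idea, using sha256.Sum256 in place of the hash-writer form above; the struct and values are illustrative:

package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

type cfg struct {
	TailnetTarget string `json:"tailnetTarget"`
	ProxyGroup    string `json:"proxyGroup"`
}

// cfgHash returns a stable hex digest of the config; an empty string on
// marshal failure mirrors the operator's behaviour above.
func cfgHash(c cfg) string {
	bs, err := json.Marshal(c)
	if err != nil {
		return ""
	}
	return fmt.Sprintf("%x", sha256.Sum256(bs))
}

func main() {
	a := cfg{TailnetTarget: "10.0.0.1", ProxyGroup: "pg"}
	b := a
	b.ProxyGroup = "pg-2"
	fmt.Println(cfgHash(a) == cfgHash(b)) // false: any field change alters the reason
}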

@@ -249,9 +249,9 @@ func portsForEndpointSlice(svc *corev1.Service) []discoveryv1.EndpointPort {
 	return ports
 }
 
-func mustHaveConfigForSvc(t *testing.T, cl client.Client, extNSvc, clusterIPSvc *corev1.Service, cm *corev1.ConfigMap, l *zap.Logger) {
+func mustHaveConfigForSvc(t *testing.T, cl client.Client, extNSvc, clusterIPSvc *corev1.Service, cm *corev1.ConfigMap, lg *zap.Logger) {
 	t.Helper()
-	wantsCfg := egressSvcCfg(extNSvc, clusterIPSvc, clusterIPSvc.Namespace, l.Sugar())
+	wantsCfg := egressSvcCfg(extNSvc, clusterIPSvc, clusterIPSvc.Namespace, lg.Sugar())
 	if err := cl.Get(context.Background(), client.ObjectKeyFromObject(cm), cm); err != nil {
 		t.Fatalf("Error retrieving ConfigMap: %v", err)
 	}

@@ -69,7 +69,7 @@ func main() {
 	}()
 	log.Print("Templating Helm chart contents")
 	helmTmplCmd := exec.Command("./tool/helm", "template", "operator", "./cmd/k8s-operator/deploy/chart",
-		"--namespace=tailscale")
+		"--namespace=tailscale", "--set=oauth.clientSecret=''")
 	helmTmplCmd.Dir = repoRoot
 	var out bytes.Buffer
 	helmTmplCmd.Stdout = &out
@@ -144,7 +144,7 @@ func generate(baseDir string) error {
 		if _, err := file.Write([]byte(helmConditionalEnd)); err != nil {
 			return fmt.Errorf("error writing helm if-statement end: %w", err)
 		}
-		return nil
+		return file.Close()
 	}
 	for _, crd := range []struct {
 		crdPath, templatePath string
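
The `return nil` to `return file.Close()` change is worth calling out: some write errors only surface at close time, and returning nil there silently drops them. A hedged, generic sketch of the pattern; the path and function name below are illustrative, not from the repo:

package main

import (
	"fmt"
	"os"
)

// writeAll creates path, writes data, and propagates close-time errors
// instead of returning nil after the last successful Write.
func writeAll(path string, data []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close() // best effort; the write error is the interesting one
		return err
	}
	return f.Close() // surface errors that only appear on close
}

func main() {
	if err := writeAll("/tmp/example.txt", []byte("hello\n")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}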

@@ -7,26 +7,50 @@ package main
 import (
 	"bytes"
+	"context"
+	"net"
 	"os"
 	"os/exec"
 	"path/filepath"
 	"strings"
 	"testing"
+	"time"
+
+	"tailscale.com/tstest/nettest"
+	"tailscale.com/util/cibuild"
 )
 
 func Test_generate(t *testing.T) {
+	nettest.SkipIfNoNetwork(t)
+	ctx, cancel := context.WithTimeout(t.Context(), 10*time.Second)
+	defer cancel()
+	if _, err := net.DefaultResolver.LookupIPAddr(ctx, "get.helm.sh"); err != nil {
+		// https://github.com/helm/helm/issues/31434
+		t.Skipf("get.helm.sh seems down or unreachable; skipping test")
+	}
 	base, err := os.Getwd()
 	base = filepath.Join(base, "../../../")
 	if err != nil {
 		t.Fatalf("error getting current working directory: %v", err)
 	}
 	defer cleanup(base)
+
+	helmCLIPath := filepath.Join(base, "tool/helm")
+	if out, err := exec.Command(helmCLIPath, "version").CombinedOutput(); err != nil && cibuild.On() {
+		// It's not just DNS. Azure is generating bogus certs within GitHub Actions at least for
+		// helm. So try to run it and see if we can even fetch it.
+		//
+		// https://github.com/helm/helm/issues/31434
+		t.Skipf("error fetching helm; skipping test in CI: %v, %s", err, out)
+	}
+
 	if err := generate(base); err != nil {
 		t.Fatalf("CRD template generation: %v", err)
 	}
 	tempDir := t.TempDir()
-	helmCLIPath := filepath.Join(base, "tool/helm")
 	helmChartTemplatesPath := filepath.Join(base, "cmd/k8s-operator/deploy/chart")
 	helmPackageCmd := exec.Command(helmCLIPath, "package", helmChartTemplatesPath, "--destination", tempDir, "--version", "0.0.1")
 	helmPackageCmd.Stderr = os.Stderr
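
The test above guards a network-dependent step twice: first by resolving the download host, then by actually running the fetched binary. A stripped-down sketch of the first guard, reusing the same host name from the diff; the test name here is illustrative:

package main_test

import (
	"context"
	"net"
	"testing"
	"time"
)

func TestNeedsNetwork(t *testing.T) {
	// Resolve the download host first and skip, rather than fail,
	// when the network (or the host) is unavailable.
	ctx, cancel := context.WithTimeout(t.Context(), 10*time.Second)
	defer cancel()
	if _, err := net.DefaultResolver.LookupIPAddr(ctx, "get.helm.sh"); err != nil {
		t.Skipf("get.helm.sh unreachable; skipping: %v", err)
	}
	// ... the test body that downloads helm would go here ...
}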

@@ -29,6 +29,7 @@ import (
 	"k8s.io/client-go/tools/record"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/reconcile"
 	"tailscale.com/internal/client/tailscale"
 	"tailscale.com/ipn"
 	"tailscale.com/ipn/ipnstate"
@@ -154,11 +155,6 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
 	// needs to be explicitly enabled for a tailnet to be able to use them.
 	serviceName := tailcfg.ServiceName("svc:" + hostname)
 	existingTSSvc, err := r.tsClient.GetVIPService(ctx, serviceName)
-	if isErrorFeatureFlagNotEnabled(err) {
-		logger.Warn(msgFeatureFlagNotEnabled)
-		r.recorder.Event(ing, corev1.EventTypeWarning, warningTailscaleServiceFeatureFlagNotEnabled, msgFeatureFlagNotEnabled)
-		return false, nil
-	}
 	if err != nil && !isErrorTailscaleServiceNotFound(err) {
 		return false, fmt.Errorf("error getting Tailscale Service %q: %w", hostname, err)
 	}
@@ -294,6 +290,25 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
 		ingCfg.Web[epHTTP] = &ipn.WebServerConfig{
 			Handlers: handlers,
 		}
+		if isHTTPRedirectEnabled(ing) {
+			logger.Warnf("Both HTTP endpoint and HTTP redirect flags are enabled: ignoring HTTP redirect.")
+		}
+	} else if isHTTPRedirectEnabled(ing) {
+		logger.Infof("HTTP redirect enabled, setting up port 80 redirect handlers")
+		epHTTP := ipn.HostPort(fmt.Sprintf("%s:80", dnsName))
+		ingCfg.TCP[80] = &ipn.TCPPortHandler{HTTP: true}
+		ingCfg.Web[epHTTP] = &ipn.WebServerConfig{
+			Handlers: map[string]*ipn.HTTPHandler{},
+		}
+		web80 := ingCfg.Web[epHTTP]
+		for mountPoint := range handlers {
+			// We send a 301 - Moved Permanently redirect from HTTP to HTTPS
+			redirectURL := "301:https://${HOST}${REQUEST_URI}"
+			logger.Debugf("Creating redirect handler: %s -> %s", mountPoint, redirectURL)
+			web80.Handlers[mountPoint] = &ipn.HTTPHandler{
+				Redirect: redirectURL,
+			}
+		}
 	}
 
 	var gotCfg *ipn.ServiceConfig
@@ -320,7 +335,7 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
 	}
 	tsSvcPorts := []string{"tcp:443"} // always 443 for Ingress
-	if isHTTPEndpointEnabled(ing) {
+	if isHTTPEndpointEnabled(ing) || isHTTPRedirectEnabled(ing) {
 		tsSvcPorts = append(tsSvcPorts, "tcp:80")
 	}
@@ -350,7 +365,7 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
 	// 5. Update tailscaled's AdvertiseServices config, which should add the Tailscale Service
 	// IPs to the ProxyGroup Pods' AllowedIPs in the next netmap update if approved.
 	mode := serviceAdvertisementHTTPS
-	if isHTTPEndpointEnabled(ing) {
+	if isHTTPEndpointEnabled(ing) || isHTTPRedirectEnabled(ing) {
 		mode = serviceAdvertisementHTTPAndHTTPS
 	}
 	if err = r.maybeUpdateAdvertiseServicesConfig(ctx, pg.Name, serviceName, mode, logger); err != nil {
@@ -381,7 +396,7 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
 			Port:     443,
 		})
 	}
-	if isHTTPEndpointEnabled(ing) {
+	if isHTTPEndpointEnabled(ing) || isHTTPRedirectEnabled(ing) {
 		ports = append(ports, networkingv1.IngressPortStatus{
 			Protocol: "TCP",
 			Port:     80,
@@ -453,11 +468,6 @@ func (r *HAIngressReconciler) maybeCleanupProxyGroup(ctx context.Context, proxyG
 		if !found {
 			logger.Infof("Tailscale Service %q is not owned by any Ingress, cleaning up", tsSvcName)
 			tsService, err := r.tsClient.GetVIPService(ctx, tsSvcName)
-			if isErrorFeatureFlagNotEnabled(err) {
-				msg := fmt.Sprintf("Unable to proceed with cleanup: %s.", msgFeatureFlagNotEnabled)
-				logger.Warn(msg)
-				return false, nil
-			}
 			if isErrorTailscaleServiceNotFound(err) {
 				return false, nil
 			}
@@ -514,16 +524,7 @@ func (r *HAIngressReconciler) maybeCleanup(ctx context.Context, hostname string,
 	logger.Infof("Ensuring that Tailscale Service %q configuration is cleaned up", hostname)
 	serviceName := tailcfg.ServiceName("svc:" + hostname)
 	svc, err := r.tsClient.GetVIPService(ctx, serviceName)
-	if err != nil {
-		if isErrorFeatureFlagNotEnabled(err) {
-			msg := fmt.Sprintf("Unable to proceed with cleanup: %s.", msgFeatureFlagNotEnabled)
-			logger.Warn(msg)
-			r.recorder.Event(ing, corev1.EventTypeWarning, warningTailscaleServiceFeatureFlagNotEnabled, msg)
-			return false, nil
-		}
-		if isErrorTailscaleServiceNotFound(err) {
-			return false, nil
-		}
+	if err != nil && !isErrorTailscaleServiceNotFound(err) {
 		return false, fmt.Errorf("error getting Tailscale Service: %w", err)
 	}
@@ -729,10 +730,15 @@ func (r *HAIngressReconciler) cleanupTailscaleService(ctx context.Context, svc *
 	}
 	if len(o.OwnerRefs) == 1 {
 		logger.Infof("Deleting Tailscale Service %q", svc.Name)
-		return false, r.tsClient.DeleteVIPService(ctx, svc.Name)
+		if err = r.tsClient.DeleteVIPService(ctx, svc.Name); err != nil && !isErrorTailscaleServiceNotFound(err) {
+			return false, err
 		}
+		return false, nil
+	}
 	o.OwnerRefs = slices.Delete(o.OwnerRefs, ix, ix+1)
-	logger.Infof("Deleting Tailscale Service %q", svc.Name)
+	logger.Infof("Creating/Updating Tailscale Service %q", svc.Name)
 	json, err := json.Marshal(o)
 	if err != nil {
 		return false, fmt.Errorf("error marshalling updated Tailscale Service owner reference: %w", err)
@@ -1122,14 +1128,6 @@ func hasCerts(ctx context.Context, cl client.Client, lc localClient, ns string,
 	return len(cert) > 0 && len(key) > 0, nil
 }
 
-func isErrorFeatureFlagNotEnabled(err error) bool {
-	// messageFFNotEnabled is the error message returned by
-	// Tailscale control plane when a Tailscale Service API call is made for a
-	// tailnet that does not have the Tailscale Services feature flag enabled.
-	const messageFFNotEnabled = "feature unavailable for tailnet"
-	return err != nil && strings.Contains(err.Error(), messageFFNotEnabled)
-}
-
 func isErrorTailscaleServiceNotFound(err error) bool {
 	var errResp tailscale.ErrResponse
 	ok := errors.As(err, &errResp)
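
isErrorTailscaleServiceNotFound (shown truncated above) follows the standard errors.As pattern: unwrap a typed API error and branch on its status. A self-contained sketch with a stand-in ErrResponse type; the real type lives in tailscale.com/internal/client/tailscale, so treat the Status field here as an assumption:

package main

import (
	"errors"
	"fmt"
	"net/http"
)

// ErrResponse is a stand-in for the API client's typed error.
type ErrResponse struct{ Status int }

func (e ErrResponse) Error() string { return fmt.Sprintf("status %d", e.Status) }

// isNotFound unwraps the typed error even through %w wrapping.
func isNotFound(err error) bool {
	var errResp ErrResponse
	return errors.As(err, &errResp) && errResp.Status == http.StatusNotFound
}

func main() {
	err := fmt.Errorf("getting service: %w", ErrResponse{Status: 404})
	fmt.Println(isNotFound(err)) // true
}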

@@ -25,6 +25,7 @@ import (
 	"k8s.io/client-go/tools/record"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/client/fake"
 	"tailscale.com/internal/client/tailscale"
 	"tailscale.com/ipn"
 	"tailscale.com/ipn/ipnstate"
@@ -67,7 +68,7 @@ func TestIngressPGReconciler(t *testing.T) {
 	// Verify initial reconciliation
 	expectReconciled(t, ingPGR, "default", "test-ingress")
-	populateTLSSecret(context.Background(), fc, "test-pg", "my-svc.ts.net")
+	populateTLSSecret(t, fc, "test-pg", "my-svc.ts.net")
 	expectReconciled(t, ingPGR, "default", "test-ingress")
 	verifyServeConfig(t, fc, "svc:my-svc", false)
 	verifyTailscaleService(t, ft, "svc:my-svc", []string{"tcp:443"})
@@ -89,7 +90,7 @@ func TestIngressPGReconciler(t *testing.T) {
 	expectReconciled(t, ingPGR, "default", "test-ingress")
 	// Verify Tailscale Service uses custom tags
-	tsSvc, err := ft.GetVIPService(context.Background(), "svc:my-svc")
+	tsSvc, err := ft.GetVIPService(t.Context(), "svc:my-svc")
 	if err != nil {
 		t.Fatalf("getting Tailscale Service: %v", err)
 	}
@@ -134,7 +135,7 @@ func TestIngressPGReconciler(t *testing.T) {
 	// Verify second Ingress reconciliation
 	expectReconciled(t, ingPGR, "default", "my-other-ingress")
-	populateTLSSecret(context.Background(), fc, "test-pg", "my-other-svc.ts.net")
+	populateTLSSecret(t, fc, "test-pg", "my-other-svc.ts.net")
 	expectReconciled(t, ingPGR, "default", "my-other-ingress")
 	verifyServeConfig(t, fc, "svc:my-other-svc", false)
 	verifyTailscaleService(t, ft, "svc:my-other-svc", []string{"tcp:443"})
@@ -151,14 +152,14 @@ func TestIngressPGReconciler(t *testing.T) {
 	verifyTailscaledConfig(t, fc, "test-pg", []string{"svc:my-svc", "svc:my-other-svc"})
 
 	// Delete second Ingress
-	if err := fc.Delete(context.Background(), ing2); err != nil {
+	if err := fc.Delete(t.Context(), ing2); err != nil {
 		t.Fatalf("deleting second Ingress: %v", err)
 	}
 	expectReconciled(t, ingPGR, "default", "my-other-ingress")
 
 	// Verify second Ingress cleanup
 	cm := &corev1.ConfigMap{}
-	if err := fc.Get(context.Background(), types.NamespacedName{
+	if err := fc.Get(t.Context(), types.NamespacedName{
 		Name:      "test-pg-ingress-config",
 		Namespace: "operator-ns",
 	}, cm); err != nil {
@@ -199,7 +200,7 @@ func TestIngressPGReconciler(t *testing.T) {
 	expectEqual(t, fc, certSecretRoleBinding(pg, "operator-ns", "my-svc.ts.net"))
 
 	// Delete the first Ingress and verify cleanup
-	if err := fc.Delete(context.Background(), ing); err != nil {
+	if err := fc.Delete(t.Context(), ing); err != nil {
 		t.Fatalf("deleting Ingress: %v", err)
 	}
@@ -207,7 +208,7 @@ func TestIngressPGReconciler(t *testing.T) {
 	// Verify the ConfigMap was cleaned up
 	cm = &corev1.ConfigMap{}
-	if err := fc.Get(context.Background(), types.NamespacedName{
+	if err := fc.Get(t.Context(), types.NamespacedName{
 		Name:      "test-pg-second-ingress-config",
 		Namespace: "operator-ns",
 	}, cm); err != nil {
@@ -228,6 +229,47 @@ func TestIngressPGReconciler(t *testing.T) {
 	expectMissing[corev1.Secret](t, fc, "operator-ns", "my-svc.ts.net")
 	expectMissing[rbacv1.Role](t, fc, "operator-ns", "my-svc.ts.net")
 	expectMissing[rbacv1.RoleBinding](t, fc, "operator-ns", "my-svc.ts.net")
+
+	// Create a third ingress
+	ing3 := &networkingv1.Ingress{
+		TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "my-other-ingress",
+			Namespace: "default",
+			UID:       types.UID("5678-UID"),
+			Annotations: map[string]string{
+				"tailscale.com/proxy-group": "test-pg",
+			},
+		},
+		Spec: networkingv1.IngressSpec{
+			IngressClassName: ptr.To("tailscale"),
+			DefaultBackend: &networkingv1.IngressBackend{
+				Service: &networkingv1.IngressServiceBackend{
+					Name: "test",
+					Port: networkingv1.ServiceBackendPort{
+						Number: 8080,
+					},
+				},
+			},
+			TLS: []networkingv1.IngressTLS{
+				{Hosts: []string{"my-other-svc.tailnetxyz.ts.net"}},
+			},
+		},
+	}
+	mustCreate(t, fc, ing3)
+	expectReconciled(t, ingPGR, ing3.Namespace, ing3.Name)
+
+	// Delete the service from "control"
+	ft.vipServices = make(map[tailcfg.ServiceName]*tailscale.VIPService)
+
+	// Delete the ingress and confirm we don't get stuck due to the VIP service not existing.
+	if err = fc.Delete(t.Context(), ing3); err != nil {
+		t.Fatalf("deleting Ingress: %v", err)
+	}
+	expectReconciled(t, ingPGR, ing3.Namespace, ing3.Name)
+	expectMissing[networkingv1.Ingress](t, fc, ing3.Namespace, ing3.Name)
 }
 
 func TestIngressPGReconciler_UpdateIngressHostname(t *testing.T) {
@@ -262,7 +304,7 @@ func TestIngressPGReconciler_UpdateIngressHostname(t *testing.T) {
 	// Verify initial reconciliation
 	expectReconciled(t, ingPGR, "default", "test-ingress")
-	populateTLSSecret(context.Background(), fc, "test-pg", "my-svc.ts.net")
+	populateTLSSecret(t, fc, "test-pg", "my-svc.ts.net")
 	expectReconciled(t, ingPGR, "default", "test-ingress")
 	verifyServeConfig(t, fc, "svc:my-svc", false)
 	verifyTailscaleService(t, ft, "svc:my-svc", []string{"tcp:443"})
@@ -273,13 +315,13 @@ func TestIngressPGReconciler_UpdateIngressHostname(t *testing.T) {
 		ing.Spec.TLS[0].Hosts[0] = "updated-svc"
 	})
 	expectReconciled(t, ingPGR, "default", "test-ingress")
-	populateTLSSecret(context.Background(), fc, "test-pg", "updated-svc.ts.net")
+	populateTLSSecret(t, fc, "test-pg", "updated-svc.ts.net")
 	expectReconciled(t, ingPGR, "default", "test-ingress")
 	verifyServeConfig(t, fc, "svc:updated-svc", false)
 	verifyTailscaleService(t, ft, "svc:updated-svc", []string{"tcp:443"})
 	verifyTailscaledConfig(t, fc, "test-pg", []string{"svc:updated-svc"})
 
-	_, err := ft.GetVIPService(context.Background(), tailcfg.ServiceName("svc:my-svc"))
+	_, err := ft.GetVIPService(context.Background(), "svc:my-svc")
 	if err == nil {
 		t.Fatalf("svc:my-svc not cleaned up")
 	}
@@ -500,7 +542,7 @@ func TestIngressPGReconciler_HTTPEndpoint(t *testing.T) {
 	// Verify initial reconciliation with HTTP enabled
 	expectReconciled(t, ingPGR, "default", "test-ingress")
-	populateTLSSecret(context.Background(), fc, "test-pg", "my-svc.ts.net")
+	populateTLSSecret(t, fc, "test-pg", "my-svc.ts.net")
 	expectReconciled(t, ingPGR, "default", "test-ingress")
 	verifyTailscaleService(t, ft, "svc:my-svc", []string{"tcp:80", "tcp:443"})
 	verifyServeConfig(t, fc, "svc:my-svc", true)
@@ -576,6 +618,236 @@ func TestIngressPGReconciler_HTTPEndpoint(t *testing.T) {
 	}
 }
 
+func TestIngressPGReconciler_HTTPRedirect(t *testing.T) {
+	ingPGR, fc, ft := setupIngressTest(t)
+
+	// Create backend Service that the Ingress will route to
+	backendSvc := &corev1.Service{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test",
+			Namespace: "default",
+		},
+		Spec: corev1.ServiceSpec{
+			ClusterIP: "10.0.0.1",
+			Ports: []corev1.ServicePort{
+				{
+					Port: 8080,
+				},
+			},
+		},
+	}
+	mustCreate(t, fc, backendSvc)
+
+	// Create test Ingress with HTTP redirect enabled
+	ing := &networkingv1.Ingress{
+		TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test-ingress",
+			Namespace: "default",
+			UID:       types.UID("1234-UID"),
+			Annotations: map[string]string{
+				"tailscale.com/proxy-group":   "test-pg",
+				"tailscale.com/http-redirect": "true",
+			},
+		},
+		Spec: networkingv1.IngressSpec{
+			IngressClassName: ptr.To("tailscale"),
+			DefaultBackend: &networkingv1.IngressBackend{
+				Service: &networkingv1.IngressServiceBackend{
+					Name: "test",
+					Port: networkingv1.ServiceBackendPort{
+						Number: 8080,
+					},
+				},
+			},
+			TLS: []networkingv1.IngressTLS{
+				{Hosts: []string{"my-svc"}},
+			},
+		},
+	}
+	if err := fc.Create(context.Background(), ing); err != nil {
+		t.Fatal(err)
+	}
+
+	// Verify initial reconciliation with HTTP redirect enabled
+	expectReconciled(t, ingPGR, "default", "test-ingress")
+	populateTLSSecret(t, fc, "test-pg", "my-svc.ts.net")
+	expectReconciled(t, ingPGR, "default", "test-ingress")
+
+	// Verify Tailscale Service includes both tcp:80 and tcp:443
+	verifyTailscaleService(t, ft, "svc:my-svc", []string{"tcp:80", "tcp:443"})
+
+	// Verify Ingress status includes port 80
+	ing = &networkingv1.Ingress{}
+	if err := fc.Get(context.Background(), types.NamespacedName{
+		Name:      "test-ingress",
+		Namespace: "default",
+	}, ing); err != nil {
+		t.Fatal(err)
+	}
+
+	// Add the Tailscale Service to prefs to have the Ingress recognised as ready
+	mustCreate(t, fc, &corev1.Secret{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test-pg-0",
+			Namespace: "operator-ns",
+			Labels:    pgSecretLabels("test-pg", kubetypes.LabelSecretTypeState),
+		},
+		Data: map[string][]byte{
+			"_current-profile": []byte("profile-foo"),
+			"profile-foo":      []byte(`{"AdvertiseServices":["svc:my-svc"],"Config":{"NodeID":"node-foo"}}`),
+		},
+	})
+
+	// Reconcile and re-fetch Ingress
+	expectReconciled(t, ingPGR, "default", "test-ingress")
+	if err := fc.Get(context.Background(), client.ObjectKeyFromObject(ing), ing); err != nil {
+		t.Fatal(err)
+	}
+
+	wantStatus := []networkingv1.IngressPortStatus{
+		{Port: 443, Protocol: "TCP"},
+		{Port: 80, Protocol: "TCP"},
+	}
+	if !reflect.DeepEqual(ing.Status.LoadBalancer.Ingress[0].Ports, wantStatus) {
+		t.Errorf("incorrect status ports: got %v, want %v",
+			ing.Status.LoadBalancer.Ingress[0].Ports, wantStatus)
+	}
+
+	// Remove HTTP redirect annotation
+	mustUpdate(t, fc, "default", "test-ingress", func(ing *networkingv1.Ingress) {
+		delete(ing.Annotations, "tailscale.com/http-redirect")
+	})
+
+	// Verify reconciliation after removing HTTP redirect
+	expectReconciled(t, ingPGR, "default", "test-ingress")
+	verifyTailscaleService(t, ft, "svc:my-svc", []string{"tcp:443"})
+
+	// Verify Ingress status no longer includes port 80
+	ing = &networkingv1.Ingress{}
+	if err := fc.Get(context.Background(), types.NamespacedName{
+		Name:      "test-ingress",
+		Namespace: "default",
+	}, ing); err != nil {
+		t.Fatal(err)
+	}
+	wantStatus = []networkingv1.IngressPortStatus{
+		{Port: 443, Protocol: "TCP"},
+	}
+	if !reflect.DeepEqual(ing.Status.LoadBalancer.Ingress[0].Ports, wantStatus) {
+		t.Errorf("incorrect status ports after removing redirect: got %v, want %v",
+			ing.Status.LoadBalancer.Ingress[0].Ports, wantStatus)
+	}
+}
+
+func TestIngressPGReconciler_HTTPEndpointAndRedirectConflict(t *testing.T) {
+	ingPGR, fc, ft := setupIngressTest(t)
+
+	// Create backend Service that the Ingress will route to
+	backendSvc := &corev1.Service{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test",
+			Namespace: "default",
+		},
+		Spec: corev1.ServiceSpec{
+			ClusterIP: "10.0.0.1",
+			Ports: []corev1.ServicePort{
+				{
+					Port: 8080,
+				},
+			},
+		},
+	}
+	mustCreate(t, fc, backendSvc)
+
+	// Create test Ingress with both HTTP endpoint and HTTP redirect enabled
+	ing := &networkingv1.Ingress{
+		TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test-ingress",
+			Namespace: "default",
+			UID:       types.UID("1234-UID"),
+			Annotations: map[string]string{
+				"tailscale.com/proxy-group":   "test-pg",
+				"tailscale.com/http-endpoint": "enabled",
+				"tailscale.com/http-redirect": "true",
+			},
+		},
+		Spec: networkingv1.IngressSpec{
+			IngressClassName: ptr.To("tailscale"),
+			DefaultBackend: &networkingv1.IngressBackend{
+				Service: &networkingv1.IngressServiceBackend{
+					Name: "test",
+					Port: networkingv1.ServiceBackendPort{
+						Number: 8080,
+					},
+				},
+			},
+			TLS: []networkingv1.IngressTLS{
+				{Hosts: []string{"my-svc"}},
+			},
+		},
+	}
+	if err := fc.Create(context.Background(), ing); err != nil {
+		t.Fatal(err)
+	}
+
+	// Verify initial reconciliation - HTTP endpoint should take precedence
+	expectReconciled(t, ingPGR, "default", "test-ingress")
+	populateTLSSecret(t, fc, "test-pg", "my-svc.ts.net")
+	expectReconciled(t, ingPGR, "default", "test-ingress")
+
+	// Verify Tailscale Service includes both tcp:80 and tcp:443
+	verifyTailscaleService(t, ft, "svc:my-svc", []string{"tcp:80", "tcp:443"})
+
+	// Verify the serve config has HTTP endpoint handlers on port 80, NOT redirect handlers
+	cm := &corev1.ConfigMap{}
+	if err := fc.Get(context.Background(), types.NamespacedName{
+		Name:      "test-pg-ingress-config",
+		Namespace: "operator-ns",
+	}, cm); err != nil {
+		t.Fatalf("getting ConfigMap: %v", err)
+	}
+
+	// Verify Ingress status includes port 80
+	ing = &networkingv1.Ingress{}
+	if err := fc.Get(context.Background(), types.NamespacedName{
+		Name:      "test-ingress",
+		Namespace: "default",
+	}, ing); err != nil {
+		t.Fatal(err)
+	}
+
+	// Add the Tailscale Service to prefs to have the Ingress recognised as ready
+	mustCreate(t, fc, &corev1.Secret{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test-pg-0",
+			Namespace: "operator-ns",
+			Labels:    pgSecretLabels("test-pg", kubetypes.LabelSecretTypeState),
+		},
+		Data: map[string][]byte{
+			"_current-profile": []byte("profile-foo"),
+			"profile-foo":      []byte(`{"AdvertiseServices":["svc:my-svc"],"Config":{"NodeID":"node-foo"}}`),
+		},
+	})
+
+	// Reconcile and re-fetch Ingress
+	expectReconciled(t, ingPGR, "default", "test-ingress")
+	if err := fc.Get(context.Background(), client.ObjectKeyFromObject(ing), ing); err != nil {
+		t.Fatal(err)
+	}
+
+	wantStatus := []networkingv1.IngressPortStatus{
+		{Port: 443, Protocol: "TCP"},
+		{Port: 80, Protocol: "TCP"},
+	}
+	if !reflect.DeepEqual(ing.Status.LoadBalancer.Ingress[0].Ports, wantStatus) {
+		t.Errorf("incorrect status ports: got %v, want %v",
+			ing.Status.LoadBalancer.Ingress[0].Ports, wantStatus)
+	}
+}
+
 func TestIngressPGReconciler_MultiCluster(t *testing.T) {
 	ingPGR, fc, ft := setupIngressTest(t)
 	ingPGR.operatorID = "operator-1"
@@ -717,7 +989,9 @@ func TestOwnerAnnotations(t *testing.T) {
 	}
 }
 
-func populateTLSSecret(ctx context.Context, c client.Client, pgName, domain string) error {
+func populateTLSSecret(t *testing.T, c client.Client, pgName, domain string) {
+	t.Helper()
 	secret := &corev1.Secret{
 		ObjectMeta: metav1.ObjectMeta{
 			Name:      domain,
@@ -736,10 +1010,12 @@ func populateTLSSecret(ctx context.Context, c client.Client, pgName, domain stri
 		},
 	}
-	_, err := createOrUpdate(ctx, c, "operator-ns", secret, func(s *corev1.Secret) {
+	_, err := createOrUpdate(t.Context(), c, "operator-ns", secret, func(s *corev1.Secret) {
 		s.Data = secret.Data
 	})
-	return err
+	if err != nil {
+		t.Fatalf("failed to populate TLS secret: %v", err)
+	}
 }
 
 func verifyTailscaleService(t *testing.T, ft *fakeTSClient, serviceName string, wantPorts []string) {
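
The populateTLSSecret rewrite above is a common test-helper refactor: accept *testing.T, call t.Helper() so failures point at the caller's line, and fail via t.Fatalf instead of returning an error every call site must check. A minimal sketch of the shape; names below are illustrative:

package main_test

import "testing"

// mustDo runs f and fails the calling test on error. t.Helper() makes
// the failure report the caller's line, not this function's.
func mustDo(t *testing.T, f func() error) {
	t.Helper()
	if err := f(); err != nil {
		t.Fatalf("helper failed: %v", err)
	}
}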

@@ -204,6 +204,27 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
 		return nil
 	}
 
+	if isHTTPRedirectEnabled(ing) {
+		logger.Infof("HTTP redirect enabled, setting up port 80 redirect handlers")
+		const magic80 = "${TS_CERT_DOMAIN}:80"
+		sc.TCP[80] = &ipn.TCPPortHandler{HTTP: true}
+		sc.Web[magic80] = &ipn.WebServerConfig{
+			Handlers: map[string]*ipn.HTTPHandler{},
+		}
+		if sc.AllowFunnel != nil && sc.AllowFunnel[magic443] {
+			sc.AllowFunnel[magic80] = true
+		}
+		web80 := sc.Web[magic80]
+		for mountPoint := range handlers {
+			// We send a 301 - Moved Permanently redirect from HTTP to HTTPS
+			redirectURL := "301:https://${HOST}${REQUEST_URI}"
+			logger.Debugf("Creating redirect handler: %s -> %s", mountPoint, redirectURL)
+			web80.Handlers[mountPoint] = &ipn.HTTPHandler{
+				Redirect: redirectURL,
+			}
+		}
+	}
+
 	crl := childResourceLabels(ing.Name, ing.Namespace, "ingress")
 	var tags []string
 	if tstr, ok := ing.Annotations[AnnotationTags]; ok {
@@ -244,14 +265,21 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
 	}
 	logger.Debugf("setting Ingress hostname to %q", dev.ingressDNSName)
-	ing.Status.LoadBalancer.Ingress = append(ing.Status.LoadBalancer.Ingress, networkingv1.IngressLoadBalancerIngress{
-		Hostname: dev.ingressDNSName,
-		Ports: []networkingv1.IngressPortStatus{
+	ports := []networkingv1.IngressPortStatus{
 		{
 			Protocol: "TCP",
 			Port:     443,
 		},
-		},
+	}
+	if isHTTPRedirectEnabled(ing) {
+		ports = append(ports, networkingv1.IngressPortStatus{
+			Protocol: "TCP",
+			Port:     80,
+		})
+	}
+	ing.Status.LoadBalancer.Ingress = append(ing.Status.LoadBalancer.Ingress, networkingv1.IngressLoadBalancerIngress{
+		Hostname: dev.ingressDNSName,
+		Ports:    ports,
 	})
 }
@@ -363,6 +391,12 @@ func handlersForIngress(ctx context.Context, ing *networkingv1.Ingress, cl clien
 	return handlers, nil
 }
 
+// isHTTPRedirectEnabled returns true if HTTP redirect is enabled for the Ingress.
+// The annotation is tailscale.com/http-redirect and it should be set to "true".
+func isHTTPRedirectEnabled(ing *networkingv1.Ingress) bool {
+	return ing.Annotations != nil && opt.Bool(ing.Annotations[AnnotationHTTPRedirect]).EqualBool(true)
+}
+
 // hostnameForIngress returns the hostname for an Ingress resource.
 // If the Ingress has TLS configured with a host, it returns the first component of that host.
 // Otherwise, it returns a hostname derived from the Ingress name and namespace.
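
isHTTPRedirectEnabled parses the annotation through opt.Bool so unset or empty values stay disabled rather than erroring. A standalone sketch of that behaviour with a local stand-in for tailscale.com/types/opt.Bool; the type and method below are assumptions for illustration:

package main

import (
	"fmt"
	"strconv"
)

// optBool mimics an optional boolean carried as a string annotation value.
type optBool string

// equalBool reports whether the value parses as a bool and equals v;
// unset, empty, or malformed values never match.
func (b optBool) equalBool(v bool) bool {
	got, err := strconv.ParseBool(string(b))
	return err == nil && got == v
}

func main() {
	ann := map[string]string{"tailscale.com/http-redirect": "true"}
	fmt.Println(optBool(ann["tailscale.com/http-redirect"]).equalBool(true)) // true
	fmt.Println(optBool(ann["tailscale.com/missing"]).equalBool(true))       // false: unset stays disabled
}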

@ -7,6 +7,7 @@ package main
import ( import (
"context" "context"
"reflect"
"testing" "testing"
"go.uber.org/zap" "go.uber.org/zap"
@ -64,12 +65,14 @@ func TestTailscaleIngress(t *testing.T) {
parentType: "ingress", parentType: "ingress",
hostname: "default-test", hostname: "default-test",
app: kubetypes.AppIngressResource, app: kubetypes.AppIngressResource,
} serveConfig: &ipn.ServeConfig{
serveConfig := &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}}, TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}}, Web: map[ipn.HostPort]*ipn.WebServerConfig{
"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{
"/": {Proxy: "http://1.2.3.4:8080/"},
}}},
},
} }
opts.serveConfig = serveConfig
expectEqual(t, fc, expectedSecret(t, fc, opts)) expectEqual(t, fc, expectedSecret(t, fc, opts))
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress")) expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"))
@ -156,12 +159,14 @@ func TestTailscaleIngressHostname(t *testing.T) {
parentType: "ingress", parentType: "ingress",
hostname: "default-test", hostname: "default-test",
app: kubetypes.AppIngressResource, app: kubetypes.AppIngressResource,
} serveConfig: &ipn.ServeConfig{
serveConfig := &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}}, TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}}, Web: map[ipn.HostPort]*ipn.WebServerConfig{
"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{
"/": {Proxy: "http://1.2.3.4:8080/"},
}}},
},
} }
opts.serveConfig = serveConfig
expectEqual(t, fc, expectedSecret(t, fc, opts)) expectEqual(t, fc, expectedSecret(t, fc, opts))
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress")) expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"))
@@ -276,12 +281,14 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
        parentType: "ingress",
        hostname:   "default-test",
        app:        kubetypes.AppIngressResource,
-   }
-   serveConfig := &ipn.ServeConfig{
-       TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
-       Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}},
-   }
-   opts.serveConfig = serveConfig
+       serveConfig: &ipn.ServeConfig{
+           TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
+           Web: map[ipn.HostPort]*ipn.WebServerConfig{
+               "${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{
+                   "/": {Proxy: "http://1.2.3.4:8080/"},
+               }}},
+       },
+   }
    expectEqual(t, fc, expectedSecret(t, fc, opts))
    expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"))
@@ -368,10 +375,6 @@ func TestTailscaleIngressWithServiceMonitor(t *testing.T) {
    }
    expectReconciled(t, ingR, "default", "test")
    fullName, shortName := findGenName(t, fc, "default", "test", "ingress")
-   serveConfig := &ipn.ServeConfig{
-       TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
-       Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}},
-   }
    opts := configOpts{
        stsName:    shortName,
        secretName: fullName,
@@ -382,7 +385,13 @@ func TestTailscaleIngressWithServiceMonitor(t *testing.T) {
        app:             kubetypes.AppIngressResource,
        namespaced:      true,
        proxyType:       proxyTypeIngressResource,
-       serveConfig:     serveConfig,
+       serveConfig: &ipn.ServeConfig{
+           TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
+           Web: map[ipn.HostPort]*ipn.WebServerConfig{
+               "${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{
+                   "/": {Proxy: "http://1.2.3.4:8080/"},
+               }}},
+       },
        resourceVersion: "1",
    }
@@ -717,12 +726,14 @@ func TestEmptyPath(t *testing.T) {
        parentType: "ingress",
        hostname:   "foo",
        app:        kubetypes.AppIngressResource,
-   }
-   serveConfig := &ipn.ServeConfig{
-       TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
-       Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}},
-   }
-   opts.serveConfig = serveConfig
+       serveConfig: &ipn.ServeConfig{
+           TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
+           Web: map[ipn.HostPort]*ipn.WebServerConfig{
+               "${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{
+                   "/": {Proxy: "http://1.2.3.4:8080/"},
+               }}},
+       },
+   }
    expectEqual(t, fc, expectedSecret(t, fc, opts))
    expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"))
@@ -816,3 +827,101 @@ func backend() *networkingv1.IngressBackend {
        },
    }
}

func TestTailscaleIngressWithHTTPRedirect(t *testing.T) {
fc := fake.NewFakeClient(ingressClass())
ft := &fakeTSClient{}
fakeTsnetServer := &fakeTSNetServer{certDomains: []string{"foo.com"}}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
ingR := &IngressReconciler{
Client: fc,
ingressClassName: "tailscale",
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
tsnetServer: fakeTsnetServer,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// 1. Create Ingress with HTTP redirect annotation
ing := ingress()
mak.Set(&ing.Annotations, AnnotationHTTPRedirect, "true")
mustCreate(t, fc, ing)
mustCreate(t, fc, service())
expectReconciled(t, ingR, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "ingress")
opts := configOpts{
replicas: ptr.To[int32](1),
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "ingress",
hostname: "default-test",
app: kubetypes.AppIngressResource,
serveConfig: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
443: {HTTPS: true},
80: {HTTP: true},
},
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{
"/": {Proxy: "http://1.2.3.4:8080/"},
}},
"${TS_CERT_DOMAIN}:80": {Handlers: map[string]*ipn.HTTPHandler{
"/": {Redirect: "301:https://${HOST}${REQUEST_URI}"},
}},
},
},
}
expectEqual(t, fc, expectedSecret(t, fc, opts))
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"))
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeResourceReqs)
// 2. Update device info to get status updated
mustUpdate(t, fc, "operator-ns", opts.secretName, func(secret *corev1.Secret) {
mak.Set(&secret.Data, "device_id", []byte("1234"))
mak.Set(&secret.Data, "device_fqdn", []byte("foo.tailnetxyz.ts.net"))
})
expectReconciled(t, ingR, "default", "test")
// Verify Ingress status includes both ports 80 and 443
ing = &networkingv1.Ingress{}
if err := fc.Get(context.Background(), types.NamespacedName{Name: "test", Namespace: "default"}, ing); err != nil {
t.Fatal(err)
}
wantPorts := []networkingv1.IngressPortStatus{
{Port: 443, Protocol: "TCP"},
{Port: 80, Protocol: "TCP"},
}
if !reflect.DeepEqual(ing.Status.LoadBalancer.Ingress[0].Ports, wantPorts) {
t.Errorf("incorrect status ports: got %v, want %v", ing.Status.LoadBalancer.Ingress[0].Ports, wantPorts)
}
// 3. Remove HTTP redirect annotation
mustUpdate(t, fc, "default", "test", func(ing *networkingv1.Ingress) {
delete(ing.Annotations, AnnotationHTTPRedirect)
})
expectReconciled(t, ingR, "default", "test")
// 4. Verify Ingress status no longer includes port 80
ing = &networkingv1.Ingress{}
if err := fc.Get(context.Background(), types.NamespacedName{Name: "test", Namespace: "default"}, ing); err != nil {
t.Fatal(err)
}
wantPorts = []networkingv1.IngressPortStatus{
{Port: 443, Protocol: "TCP"},
}
if !reflect.DeepEqual(ing.Status.LoadBalancer.Ingress[0].Ports, wantPorts) {
t.Errorf("incorrect status ports after removing redirect: got %v, want %v", ing.Status.LoadBalancer.Ingress[0].Ports, wantPorts)
}
}

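The "${HOST}" and "${REQUEST_URI}" placeholders in the redirect handler above are expanded per request by tailscaled's serve machinery. As a rough stand-in, and explicitly not the tailscaled implementation, the "301:https://${HOST}${REQUEST_URI}" template amounts to this:

package main

import "net/http"

func main() {
    // Rough stand-in only: every plain-HTTP request is answered with a 301
    // to the same host and path over HTTPS, which is what the
    // "301:https://${HOST}${REQUEST_URI}" handler template expresses.
    http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        http.Redirect(w, r, "https://"+r.Host+r.RequestURI, http.StatusMovedPermanently)
    }))
}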
@@ -26,6 +26,7 @@ import (
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
    "sigs.k8s.io/yaml"

    tsoperator "tailscale.com/k8s-operator"
    tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
    "tailscale.com/kube/kubetypes"
@@ -45,10 +46,7 @@ const (
    messageMultipleDNSConfigsPresent = "Multiple DNSConfig resources found in cluster. Please ensure no more than one is present."

    defaultNameserverImageRepo = "tailscale/k8s-nameserver"
-   // TODO (irbekrm): once we start publishing nameserver images for stable
-   // track, replace 'unstable' here with the version of this operator
-   // instance.
-   defaultNameserverImageTag = "unstable"
+   defaultNameserverImageTag = "stable"
)

// NameserverReconciler knows how to create nameserver resources in cluster in

@@ -19,6 +19,7 @@ import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/controller-runtime/pkg/client/fake"
    "sigs.k8s.io/yaml"

    operatorutils "tailscale.com/k8s-operator"
    tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
    "tailscale.com/tstest"
@@ -182,7 +183,7 @@ func TestNameserverReconciler(t *testing.T) {
        dnsCfg.Spec.Nameserver.Image = nil
    })
    expectReconciled(t, reconciler, "", "test")
-   wantsDeploy.Spec.Template.Spec.Containers[0].Image = "tailscale/k8s-nameserver:unstable"
+   wantsDeploy.Spec.Template.Spec.Containers[0].Image = "tailscale/k8s-nameserver:stable"
    expectEqual(t, fc, wantsDeploy)
    })
}

@@ -44,10 +44,10 @@ import (
    "sigs.k8s.io/controller-runtime/pkg/manager/signals"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
-   "tailscale.com/envknob"
    "tailscale.com/client/local"
    "tailscale.com/client/tailscale"
+   "tailscale.com/envknob"
    "tailscale.com/hostinfo"
    "tailscale.com/ipn"
    "tailscale.com/ipn/store/kubestore"
@@ -164,22 +164,24 @@ func main() {
    runReconcilers(rOpts)
}

-// initTSNet initializes the tsnet.Server and logs in to Tailscale. It uses the
-// CLIENT_ID_FILE and CLIENT_SECRET_FILE environment variables to authenticate
-// with Tailscale.
+// initTSNet initializes the tsnet.Server and logs in to Tailscale. If CLIENT_ID
+// is set, it authenticates to the Tailscale API using the federated OIDC workload
+// identity flow. Otherwise, it uses the CLIENT_ID_FILE and CLIENT_SECRET_FILE
+// environment variables to authenticate with static credentials.
func initTSNet(zlog *zap.SugaredLogger, loginServer string) (*tsnet.Server, tsClient) {
    var (
-       clientIDPath     = defaultEnv("CLIENT_ID_FILE", "")
-       clientSecretPath = defaultEnv("CLIENT_SECRET_FILE", "")
+       clientID         = defaultEnv("CLIENT_ID", "")          // Used for workload identity federation.
+       clientIDPath     = defaultEnv("CLIENT_ID_FILE", "")     // Used for static client credentials.
+       clientSecretPath = defaultEnv("CLIENT_SECRET_FILE", "") // Used for static client credentials.
        hostname         = defaultEnv("OPERATOR_HOSTNAME", "tailscale-operator")
        kubeSecret       = defaultEnv("OPERATOR_SECRET", "")
        operatorTags     = defaultEnv("OPERATOR_INITIAL_TAGS", "tag:k8s-operator")
    )
    startlog := zlog.Named("startup")
-   if clientIDPath == "" || clientSecretPath == "" {
-       startlog.Fatalf("CLIENT_ID_FILE and CLIENT_SECRET_FILE must be set")
+   if clientID == "" && (clientIDPath == "" || clientSecretPath == "") {
+       startlog.Fatalf("CLIENT_ID_FILE and CLIENT_SECRET_FILE must be set") // TODO(tomhjp): error message can mention WIF once it's publicly available.
    }
-   tsc, err := newTSClient(context.Background(), clientIDPath, clientSecretPath, loginServer)
+   tsc, err := newTSClient(zlog.Named("ts-api-client"), clientID, clientIDPath, clientSecretPath, loginServer)
    if err != nil {
        startlog.Fatalf("error creating Tailscale client: %v", err)
    }
@@ -636,7 +638,7 @@ func runReconcilers(opts reconcilerOpts) {
        recorder:    eventRecorder,
        tsNamespace: opts.tailscaleNamespace,
        Client:      mgr.GetClient(),
-       l:           opts.log.Named("recorder-reconciler"),
+       log:         opts.log.Named("recorder-reconciler"),
        clock:       tstime.DefaultClock{},
        tsClient:    opts.tsClient,
        loginServer: opts.loginServer,
@@ -691,7 +693,7 @@ func runReconcilers(opts reconcilerOpts) {
        Complete(&ProxyGroupReconciler{
            recorder: eventRecorder,
            Client:   mgr.GetClient(),
-           l:        opts.log.Named("proxygroup-reconciler"),
+           log:      opts.log.Named("proxygroup-reconciler"),
            clock:    tstime.DefaultClock{},
            tsClient: opts.tsClient,
@@ -1120,7 +1122,7 @@ func serviceHandlerForIngress(cl client.Client, logger *zap.SugaredLogger, ingre
    reqs := make([]reconcile.Request, 0)
    for _, ing := range ingList.Items {
        if ing.Spec.IngressClassName == nil || *ing.Spec.IngressClassName != ingressClassName {
-           return nil
+           continue
        }
        if hasProxyGroupAnnotation(&ing) {
            // We don't want to reconcile backend Services for Ingresses for ProxyGroups.

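The `return nil` to `continue` change in serviceHandlerForIngress above fixes a real filtering bug: the handler gave up at the first Ingress whose class did not match, instead of skipping it and checking the rest. A self-contained sketch with hypothetical names:

package main

import "fmt"

func main() {
    ingresses := []struct{ name, class string }{
        {"nginx-ing", "nginx"},
        {"ts-ing", "tailscale"},
    }
    var reqs []string
    for _, ing := range ingresses {
        if ing.class != "tailscale" {
            continue // the old code effectively returned nil here, dropping "ts-ing" too
        }
        reqs = append(reqs, ing.name)
    }
    fmt.Println(reqs) // [ts-ing]; with the early return it would have been empty
}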
@@ -1282,8 +1282,8 @@ func TestServiceProxyClassAnnotation(t *testing.T) {
    slist := &corev1.SecretList{}
    fc.List(context.Background(), slist, client.InNamespace("operator-ns"))
    for _, i := range slist.Items {
-       l, _ := json.Marshal(i.Labels)
-       t.Logf("found secret %q with labels %q ", i.Name, string(l))
+       labels, _ := json.Marshal(i.Labels)
+       t.Logf("found secret %q with labels %q ", i.Name, string(labels))
    }

    _, shortName := findGenName(t, fc, "default", "test", "svc")
@@ -1698,6 +1698,42 @@ func Test_serviceHandlerForIngress(t *testing.T) {
    }
}

func Test_serviceHandlerForIngress_multipleIngressClasses(t *testing.T) {
fc := fake.NewFakeClient()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{Name: "backend", Namespace: "default"},
}
mustCreate(t, fc, svc)
mustCreate(t, fc, &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{Name: "nginx-ing", Namespace: "default"},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("nginx"),
DefaultBackend: &networkingv1.IngressBackend{Service: &networkingv1.IngressServiceBackend{Name: "backend"}},
},
})
mustCreate(t, fc, &networkingv1.Ingress{
ObjectMeta: metav1.ObjectMeta{Name: "ts-ing", Namespace: "default"},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{Service: &networkingv1.IngressServiceBackend{Name: "backend"}},
},
})
got := serviceHandlerForIngress(fc, zl.Sugar(), "tailscale")(context.Background(), svc)
want := []reconcile.Request{{NamespacedName: types.NamespacedName{Namespace: "default", Name: "ts-ing"}}}
if diff := cmp.Diff(got, want); diff != "" {
t.Fatalf("unexpected reconcile requests (-got +want):\n%s", diff)
}
}
func Test_clusterDomainFromResolverConf(t *testing.T) {
    zl, err := zap.NewDevelopment()
    if err != nil {

@@ -80,7 +80,7 @@ var (
// ProxyGroupReconciler ensures cluster resources for a ProxyGroup definition.
type ProxyGroupReconciler struct {
    client.Client
-   l        *zap.SugaredLogger
+   log      *zap.SugaredLogger
    recorder record.EventRecorder
    clock    tstime.Clock
    tsClient tsClient
@@ -101,7 +101,7 @@ type ProxyGroupReconciler struct {
}

func (r *ProxyGroupReconciler) logger(name string) *zap.SugaredLogger {
-   return r.l.With("ProxyGroup", name)
+   return r.log.With("ProxyGroup", name)
}

func (r *ProxyGroupReconciler) Reconcile(ctx context.Context, req reconcile.Request) (_ reconcile.Result, err error) {

@@ -524,16 +524,16 @@ func pgSecretLabels(pgName, secretType string) map[string]string {
}

func pgLabels(pgName string, customLabels map[string]string) map[string]string {
-   l := make(map[string]string, len(customLabels)+3)
+   labels := make(map[string]string, len(customLabels)+3)
    for k, v := range customLabels {
-       l[k] = v
+       labels[k] = v
    }

-   l[kubetypes.LabelManaged] = "true"
-   l[LabelParentType] = "proxygroup"
-   l[LabelParentName] = pgName
+   labels[kubetypes.LabelManaged] = "true"
+   labels[LabelParentType] = "proxygroup"
+   labels[LabelParentName] = pgName

-   return l
+   return labels
}

func pgOwnerReference(owner *tsapi.ProxyGroup) []metav1.OwnerReference {

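For reference, a sketch of the label set pgLabels produces. The exact label key strings are assumptions here; this diff only shows the constant names, not their values.

package main

import "fmt"

func main() {
    // Mirrors pgLabels above for a ProxyGroup named "pg-prod" with one custom label.
    custom := map[string]string{"team": "infra"}
    labels := make(map[string]string, len(custom)+3)
    for k, v := range custom {
        labels[k] = v
    }
    labels["tailscale.com/managed"] = "true"                     // kubetypes.LabelManaged (assumed value)
    labels["tailscale.com/parent-resource-type"] = "proxygroup"  // LabelParentType (assumed value)
    labels["tailscale.com/parent-resource"] = "pg-prod"          // LabelParentName (assumed value)
    fmt.Println(labels)
}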
@@ -670,7 +670,7 @@ func TestProxyGroupWithStaticEndpoints(t *testing.T) {
        t.Logf("created node %q with data", n.name)
    }

-   reconciler.l = zl.Sugar().With("TestName", tt.name).With("Reconcile", i)
+   reconciler.log = zl.Sugar().With("TestName", tt.name).With("Reconcile", i)
    pg.Spec.Replicas = r.replicas
    pc.Spec.StaticEndpoints = r.staticEndpointConfig
@@ -784,7 +784,7 @@ func TestProxyGroupWithStaticEndpoints(t *testing.T) {
        Client:   fc,
        tsClient: tsClient,
        recorder: fr,
-       l:        zl.Sugar().With("TestName", tt.name).With("Reconcile", "cleanup"),
+       log:      zl.Sugar().With("TestName", tt.name).With("Reconcile", "cleanup"),
        clock:    cl,
    }
@@ -845,7 +845,7 @@ func TestProxyGroup(t *testing.T) {
        Client:   fc,
        tsClient: tsClient,
        recorder: fr,
-       l:        zl.Sugar(),
+       log:      zl.Sugar(),
        clock:    cl,
    }
    crd := &apiextensionsv1.CustomResourceDefinition{ObjectMeta: metav1.ObjectMeta{Name: serviceMonitorCRD}}
@@ -1049,7 +1049,7 @@ func TestProxyGroupTypes(t *testing.T) {
        tsNamespace:  tsNamespace,
        tsProxyImage: testProxyImage,
        Client:       fc,
-       l:            zl.Sugar(),
+       log:          zl.Sugar(),
        tsClient:     &fakeTSClient{},
        clock:        tstest.NewClock(tstest.ClockOpts{}),
    }
@@ -1289,24 +1289,24 @@ func TestKubeAPIServerStatusConditionFlow(t *testing.T) {
        tsNamespace:  tsNamespace,
        tsProxyImage: testProxyImage,
        Client:       fc,
-       l:            zap.Must(zap.NewDevelopment()).Sugar(),
+       log:          zap.Must(zap.NewDevelopment()).Sugar(),
        tsClient:     &fakeTSClient{},
        clock:        tstest.NewClock(tstest.ClockOpts{}),
    }
    expectReconciled(t, r, "", pg.Name)

    pg.ObjectMeta.Finalizers = append(pg.ObjectMeta.Finalizers, FinalizerName)
-   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupAvailable, metav1.ConditionFalse, reasonProxyGroupCreating, "", 0, r.clock, r.l)
-   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "", 1, r.clock, r.l)
+   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupAvailable, metav1.ConditionFalse, reasonProxyGroupCreating, "", 0, r.clock, r.log)
+   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "", 1, r.clock, r.log)
    expectEqual(t, fc, pg, omitPGStatusConditionMessages)

    // Set kube-apiserver valid.
    mustUpdateStatus(t, fc, "", pg.Name, func(p *tsapi.ProxyGroup) {
-       tsoperator.SetProxyGroupCondition(p, tsapi.KubeAPIServerProxyValid, metav1.ConditionTrue, reasonKubeAPIServerProxyValid, "", 1, r.clock, r.l)
+       tsoperator.SetProxyGroupCondition(p, tsapi.KubeAPIServerProxyValid, metav1.ConditionTrue, reasonKubeAPIServerProxyValid, "", 1, r.clock, r.log)
    })
    expectReconciled(t, r, "", pg.Name)
-   tsoperator.SetProxyGroupCondition(pg, tsapi.KubeAPIServerProxyValid, metav1.ConditionTrue, reasonKubeAPIServerProxyValid, "", 1, r.clock, r.l)
-   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "", 1, r.clock, r.l)
+   tsoperator.SetProxyGroupCondition(pg, tsapi.KubeAPIServerProxyValid, metav1.ConditionTrue, reasonKubeAPIServerProxyValid, "", 1, r.clock, r.log)
+   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "", 1, r.clock, r.log)
    expectEqual(t, fc, pg, omitPGStatusConditionMessages)

    // Set available.
@@ -1318,17 +1318,17 @@ func TestKubeAPIServerStatusConditionFlow(t *testing.T) {
            TailnetIPs: []string{"1.2.3.4", "::1"},
        },
    }
-   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupAvailable, metav1.ConditionTrue, reasonProxyGroupAvailable, "", 0, r.clock, r.l)
-   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "", 1, r.clock, r.l)
+   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupAvailable, metav1.ConditionTrue, reasonProxyGroupAvailable, "", 0, r.clock, r.log)
+   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "", 1, r.clock, r.log)
    expectEqual(t, fc, pg, omitPGStatusConditionMessages)

    // Set kube-apiserver configured.
    mustUpdateStatus(t, fc, "", pg.Name, func(p *tsapi.ProxyGroup) {
-       tsoperator.SetProxyGroupCondition(p, tsapi.KubeAPIServerProxyConfigured, metav1.ConditionTrue, reasonKubeAPIServerProxyConfigured, "", 1, r.clock, r.l)
+       tsoperator.SetProxyGroupCondition(p, tsapi.KubeAPIServerProxyConfigured, metav1.ConditionTrue, reasonKubeAPIServerProxyConfigured, "", 1, r.clock, r.log)
    })
    expectReconciled(t, r, "", pg.Name)
-   tsoperator.SetProxyGroupCondition(pg, tsapi.KubeAPIServerProxyConfigured, metav1.ConditionTrue, reasonKubeAPIServerProxyConfigured, "", 1, r.clock, r.l)
-   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionTrue, reasonProxyGroupReady, "", 1, r.clock, r.l)
+   tsoperator.SetProxyGroupCondition(pg, tsapi.KubeAPIServerProxyConfigured, metav1.ConditionTrue, reasonKubeAPIServerProxyConfigured, "", 1, r.clock, r.log)
+   tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionTrue, reasonProxyGroupReady, "", 1, r.clock, r.log)
    expectEqual(t, fc, pg, omitPGStatusConditionMessages)
}
@@ -1342,7 +1342,7 @@ func TestKubeAPIServerType_DoesNotOverwriteServicesConfig(t *testing.T) {
        tsNamespace:  tsNamespace,
        tsProxyImage: testProxyImage,
        Client:       fc,
-       l:            zap.Must(zap.NewDevelopment()).Sugar(),
+       log:          zap.Must(zap.NewDevelopment()).Sugar(),
        tsClient:     &fakeTSClient{},
        clock:        tstest.NewClock(tstest.ClockOpts{}),
    }
@@ -1427,7 +1427,7 @@ func TestIngressAdvertiseServicesConfigPreserved(t *testing.T) {
        tsNamespace:  tsNamespace,
        tsProxyImage: testProxyImage,
        Client:       fc,
-       l:            zap.Must(zap.NewDevelopment()).Sugar(),
+       log:          zap.Must(zap.NewDevelopment()).Sugar(),
        tsClient:     &fakeTSClient{},
        clock:        tstest.NewClock(tstest.ClockOpts{}),
    }
@@ -1902,7 +1902,7 @@ func TestProxyGroupLetsEncryptStaging(t *testing.T) {
        defaultProxyClass: tt.defaultProxyClass,
        Client:            fc,
        tsClient:          &fakeTSClient{},
-       l:                 zl.Sugar(),
+       log:               zl.Sugar(),
        clock:             cl,
    }

@@ -70,6 +70,7 @@ const (
    // Annotations settable by users on ingresses.
    AnnotationFunnel = "tailscale.com/funnel"
+   AnnotationHTTPRedirect = "tailscale.com/http-redirect"

    // If set to true, set up iptables/nftables rules in the proxy forward
    // cluster traffic to the tailnet IP of that proxy. This can only be set

@@ -207,11 +207,6 @@ func (r *HAServiceReconciler) maybeProvision(ctx context.Context, hostname strin
    // already created and not owned by this Service.
    serviceName := tailcfg.ServiceName("svc:" + hostname)
    existingTSSvc, err := r.tsClient.GetVIPService(ctx, serviceName)
-   if isErrorFeatureFlagNotEnabled(err) {
-       logger.Warn(msgFeatureFlagNotEnabled)
-       r.recorder.Event(svc, corev1.EventTypeWarning, warningTailscaleServiceFeatureFlagNotEnabled, msgFeatureFlagNotEnabled)
-       return false, nil
-   }
    if err != nil && !isErrorTailscaleServiceNotFound(err) {
        return false, fmt.Errorf("error getting Tailscale Service %q: %w", hostname, err)
    }
@@ -530,11 +525,6 @@ func (r *HAServiceReconciler) tailnetCertDomain(ctx context.Context) (string, er
// It returns true if an existing Tailscale Service was updated to remove owner reference, as well as any error that occurred.
func cleanupTailscaleService(ctx context.Context, tsClient tsClient, name tailcfg.ServiceName, operatorID string, logger *zap.SugaredLogger) (updated bool, err error) {
    svc, err := tsClient.GetVIPService(ctx, name)
-   if isErrorFeatureFlagNotEnabled(err) {
-       msg := fmt.Sprintf("Unable to proceed with cleanup: %s.", msgFeatureFlagNotEnabled)
-       logger.Warn(msg)
-       return false, nil
-   }
    if err != nil {
        errResp := &tailscale.ErrResponse{}
        ok := errors.As(err, errResp)

@@ -8,8 +8,13 @@ package main

import (
    "context"
    "fmt"
+   "net/http"
    "os"
+   "sync"
+   "time"

+   "go.uber.org/zap"
+   "golang.org/x/oauth2"
    "golang.org/x/oauth2/clientcredentials"
    "tailscale.com/internal/client/tailscale"
    "tailscale.com/ipn"
@@ -20,30 +25,53 @@ import (
// call should be performed on the default tailnet for the provided credentials.
const (
    defaultTailnet = "-"
+   oidcJWTPath    = "/var/run/secrets/tailscale/serviceaccount/token"
)

-func newTSClient(ctx context.Context, clientIDPath, clientSecretPath, loginServer string) (tsClient, error) {
-   clientID, err := os.ReadFile(clientIDPath)
-   if err != nil {
-       return nil, fmt.Errorf("error reading client ID %q: %w", clientIDPath, err)
-   }
-   clientSecret, err := os.ReadFile(clientSecretPath)
-   if err != nil {
-       return nil, fmt.Errorf("reading client secret %q: %w", clientSecretPath, err)
-   }
-   const tokenURLPath = "/api/v2/oauth/token"
-   tokenURL := fmt.Sprintf("%s%s", ipn.DefaultControlURL, tokenURLPath)
-   if loginServer != "" {
-       tokenURL = fmt.Sprintf("%s%s", loginServer, tokenURLPath)
-   }
-   credentials := clientcredentials.Config{
-       ClientID:     string(clientID),
-       ClientSecret: string(clientSecret),
-       TokenURL:     tokenURL,
-   }
+func newTSClient(logger *zap.SugaredLogger, clientID, clientIDPath, clientSecretPath, loginServer string) (*tailscale.Client, error) {
+   baseURL := ipn.DefaultControlURL
+   if loginServer != "" {
+       baseURL = loginServer
+   }
+   var httpClient *http.Client
+   if clientID == "" {
+       // Use static client credentials mounted to disk.
+       id, err := os.ReadFile(clientIDPath)
+       if err != nil {
+           return nil, fmt.Errorf("error reading client ID %q: %w", clientIDPath, err)
+       }
+       secret, err := os.ReadFile(clientSecretPath)
+       if err != nil {
+           return nil, fmt.Errorf("reading client secret %q: %w", clientSecretPath, err)
+       }
+       credentials := clientcredentials.Config{
+           ClientID:     string(id),
+           ClientSecret: string(secret),
+           TokenURL:     fmt.Sprintf("%s%s", baseURL, "/api/v2/oauth/token"),
+       }
+       tokenSrc := credentials.TokenSource(context.Background())
+       httpClient = oauth2.NewClient(context.Background(), tokenSrc)
+   } else {
+       // Use workload identity federation.
+       tokenSrc := &jwtTokenSource{
+           logger:  logger,
+           jwtPath: oidcJWTPath,
+           baseCfg: clientcredentials.Config{
+               ClientID: clientID,
+               TokenURL: fmt.Sprintf("%s%s", baseURL, "/api/v2/oauth/token-exchange"),
+           },
+       }
+       httpClient = &http.Client{
+           Transport: &oauth2.Transport{
+               Source: tokenSrc,
+           },
+       }
+   }
    c := tailscale.NewClient(defaultTailnet, nil)
    c.UserAgent = "tailscale-k8s-operator"
-   c.HTTPClient = credentials.Client(ctx)
+   c.HTTPClient = httpClient
    if loginServer != "" {
        c.BaseURL = loginServer
    }
@@ -63,3 +91,43 @@ type tsClient interface {
    // DeleteVIPService is a method for deleting a Tailscale Service.
    DeleteVIPService(ctx context.Context, name tailcfg.ServiceName) error
}

// jwtTokenSource implements the [oauth2.TokenSource] interface, but with the
// ability to regenerate a fresh underlying token source each time a new value
// of the JWT parameter is needed due to expiration.
type jwtTokenSource struct {
logger *zap.SugaredLogger
jwtPath string // Path to the file containing an automatically refreshed JWT.
baseCfg clientcredentials.Config // Holds config that doesn't change for the lifetime of the process.
mu sync.Mutex // Guards underlying.
underlying oauth2.TokenSource // The oauth2 client implementation. Does its own separate caching of the access token.
}
func (s *jwtTokenSource) Token() (*oauth2.Token, error) {
s.mu.Lock()
defer s.mu.Unlock()
if s.underlying != nil {
t, err := s.underlying.Token()
if err == nil && t != nil && t.Valid() {
return t, nil
}
}
s.logger.Debugf("Refreshing JWT from %s", s.jwtPath)
tk, err := os.ReadFile(s.jwtPath)
if err != nil {
return nil, fmt.Errorf("error reading JWT from %q: %w", s.jwtPath, err)
}
// Shallow copy of the base config.
credentials := s.baseCfg
credentials.EndpointParams = map[string][]string{
"jwt": {string(tk)},
}
src := credentials.TokenSource(context.Background())
s.underlying = oauth2.ReuseTokenSourceWithExpiry(nil, src, time.Minute)
return s.underlying.Token()
}

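Both auth paths above reduce to an OAuth2 client-credentials grant; the workload identity path simply swaps the static client secret for the projected service account JWT, sent as an extra "jwt" form parameter to the token-exchange endpoint, exactly as jwtTokenSource sets EndpointParams. A minimal sketch, with placeholder client ID and JWT values:

package main

import (
    "context"
    "fmt"

    "golang.org/x/oauth2/clientcredentials"
    "tailscale.com/ipn"
)

func main() {
    cfg := clientcredentials.Config{
        ClientID: "example-client-id", // placeholder
        TokenURL: ipn.DefaultControlURL + "/api/v2/oauth/token-exchange",
        // The projected JWT rides along as a form parameter on an otherwise
        // ordinary client-credentials token request.
        EndpointParams: map[string][]string{"jwt": {"<projected-token>"}},
    }
    tok, err := cfg.TokenSource(context.Background()).Token()
    fmt.Println(tok, err)
}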
@@ -0,0 +1,135 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"testing"
"go.uber.org/zap"
"golang.org/x/oauth2"
)
func TestNewStaticClient(t *testing.T) {
const (
clientIDFile = "client-id"
clientSecretFile = "client-secret"
)
tmp := t.TempDir()
clientIDPath := filepath.Join(tmp, clientIDFile)
if err := os.WriteFile(clientIDPath, []byte("test-client-id"), 0600); err != nil {
t.Fatalf("error writing test file %q: %v", clientIDPath, err)
}
clientSecretPath := filepath.Join(tmp, clientSecretFile)
if err := os.WriteFile(clientSecretPath, []byte("test-client-secret"), 0600); err != nil {
t.Fatalf("error writing test file %q: %v", clientSecretPath, err)
}
srv := testAPI(t, 3600)
cl, err := newTSClient(zap.NewNop().Sugar(), "", clientIDPath, clientSecretPath, srv.URL)
if err != nil {
t.Fatalf("error creating Tailscale client: %v", err)
}
resp, err := cl.HTTPClient.Get(srv.URL)
if err != nil {
t.Fatalf("error making test API call: %v", err)
}
defer resp.Body.Close()
got, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("error reading response body: %v", err)
}
want := "Bearer " + testToken("/api/v2/oauth/token", "test-client-id", "test-client-secret", "")
if string(got) != want {
t.Errorf("got %q; want %q", got, want)
}
}
func TestNewWorkloadIdentityClient(t *testing.T) {
// 5 seconds is within expiryDelta leeway, so the access token will
// immediately be considered expired and get refreshed on each access.
srv := testAPI(t, 5)
cl, err := newTSClient(zap.NewNop().Sugar(), "test-client-id", "", "", srv.URL)
if err != nil {
t.Fatalf("error creating Tailscale client: %v", err)
}
// Modify the path where the JWT will be read from.
oauth2Transport, ok := cl.HTTPClient.Transport.(*oauth2.Transport)
if !ok {
t.Fatalf("expected oauth2.Transport, got %T", cl.HTTPClient.Transport)
}
jwtTokenSource, ok := oauth2Transport.Source.(*jwtTokenSource)
if !ok {
t.Fatalf("expected jwtTokenSource, got %T", oauth2Transport.Source)
}
tmp := t.TempDir()
jwtPath := filepath.Join(tmp, "token")
jwtTokenSource.jwtPath = jwtPath
for _, jwt := range []string{"test-jwt", "updated-test-jwt"} {
if err := os.WriteFile(jwtPath, []byte(jwt), 0600); err != nil {
t.Fatalf("error writing test file %q: %v", jwtPath, err)
}
resp, err := cl.HTTPClient.Get(srv.URL)
if err != nil {
t.Fatalf("error making test API call: %v", err)
}
defer resp.Body.Close()
got, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("error reading response body: %v", err)
}
if want := "Bearer " + testToken("/api/v2/oauth/token-exchange", "test-client-id", "", jwt); string(got) != want {
t.Errorf("got %q; want %q", got, want)
}
}
}
func testAPI(t *testing.T, expirationSeconds int) *httptest.Server {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
t.Logf("test server got request: %s %s", r.Method, r.URL.Path)
switch r.URL.Path {
case "/api/v2/oauth/token", "/api/v2/oauth/token-exchange":
id, secret, ok := r.BasicAuth()
if !ok {
t.Fatal("missing or invalid basic auth")
}
w.Header().Set("Content-Type", "application/json")
if err := json.NewEncoder(w).Encode(map[string]any{
"access_token": testToken(r.URL.Path, id, secret, r.FormValue("jwt")),
"token_type": "Bearer",
"expires_in": expirationSeconds,
}); err != nil {
t.Fatalf("error writing response: %v", err)
}
case "/":
// Echo back the authz header for test assertions.
_, err := w.Write([]byte(r.Header.Get("Authorization")))
if err != nil {
t.Fatalf("error writing response: %v", err)
}
default:
w.WriteHeader(http.StatusNotFound)
}
}))
t.Cleanup(srv.Close)
return srv
}
func testToken(path, id, secret, jwt string) string {
return fmt.Sprintf("%s|%s|%s|%s", path, id, secret, jwt)
}

@@ -12,6 +12,7 @@ import (
    "fmt"
    "net/http"
    "slices"
+   "strconv"
    "strings"
    "sync"
@@ -29,6 +30,7 @@ import (
    "k8s.io/client-go/tools/record"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
    "tailscale.com/client/tailscale"
    tsoperator "tailscale.com/k8s-operator"
    tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
@@ -54,7 +56,7 @@ var gaugeRecorderResources = clientmetric.NewGauge(kubetypes.MetricRecorderCount
// Recorder CRs.
type RecorderReconciler struct {
    client.Client
-   l           *zap.SugaredLogger
+   log         *zap.SugaredLogger
    recorder    record.EventRecorder
    clock       tstime.Clock
    tsNamespace string
@@ -66,16 +68,16 @@ type RecorderReconciler struct {
}

func (r *RecorderReconciler) logger(name string) *zap.SugaredLogger {
-   return r.l.With("Recorder", name)
+   return r.log.With("Recorder", name)
}

-func (r *RecorderReconciler) Reconcile(ctx context.Context, req reconcile.Request) (_ reconcile.Result, err error) {
+func (r *RecorderReconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
    logger := r.logger(req.Name)
    logger.Debugf("starting reconcile")
    defer logger.Debugf("reconcile finished")

    tsr := new(tsapi.Recorder)
-   err = r.Get(ctx, req.NamespacedName, tsr)
+   err := r.Get(ctx, req.NamespacedName, tsr)
    if apierrors.IsNotFound(err) {
        logger.Debugf("Recorder not found, assuming it was deleted")
        return reconcile.Result{}, nil
@@ -98,7 +100,7 @@ func (r *RecorderReconciler) Reconcile(ctx context.Context, req reconcile.Reques
        }
        tsr.Finalizers = slices.Delete(tsr.Finalizers, ix, ix+1)
-       if err := r.Update(ctx, tsr); err != nil {
+       if err = r.Update(ctx, tsr); err != nil {
            return reconcile.Result{}, err
        }
        return reconcile.Result{}, nil
@@ -110,10 +112,11 @@ func (r *RecorderReconciler) Reconcile(ctx context.Context, req reconcile.Reques
        if !apiequality.Semantic.DeepEqual(oldTSRStatus, &tsr.Status) {
            // An error encountered here should get returned by the Reconcile function.
            if updateErr := r.Client.Status().Update(ctx, tsr); updateErr != nil {
-               err = errors.Join(err, updateErr)
+               return reconcile.Result{}, errors.Join(err, updateErr)
            }
        }
-       return reconcile.Result{}, err
+       return reconcile.Result{}, nil
    }

    if !slices.Contains(tsr.Finalizers, FinalizerName) {
@@ -123,12 +126,12 @@ func (r *RecorderReconciler) Reconcile(ctx context.Context, req reconcile.Reques
        // operation is underway.
        logger.Infof("ensuring Recorder is set up")
        tsr.Finalizers = append(tsr.Finalizers, FinalizerName)
-       if err := r.Update(ctx, tsr); err != nil {
+       if err = r.Update(ctx, tsr); err != nil {
            return setStatusReady(tsr, metav1.ConditionFalse, reasonRecorderCreationFailed, reasonRecorderCreationFailed)
        }
    }

-   if err := r.validate(ctx, tsr); err != nil {
+   if err = r.validate(ctx, tsr); err != nil {
        message := fmt.Sprintf("Recorder is invalid: %s", err)
        r.recorder.Eventf(tsr, corev1.EventTypeWarning, reasonRecorderInvalid, message)
        return setStatusReady(tsr, metav1.ConditionFalse, reasonRecorderInvalid, message)
@@ -160,19 +163,29 @@ func (r *RecorderReconciler) maybeProvision(ctx context.Context, tsr *tsapi.Reco
    gaugeRecorderResources.Set(int64(r.recorders.Len()))
    r.mu.Unlock()

-   if err := r.ensureAuthSecretCreated(ctx, tsr); err != nil {
+   if err := r.ensureAuthSecretsCreated(ctx, tsr); err != nil {
        return fmt.Errorf("error creating secrets: %w", err)
    }
-   // State Secret is precreated so we can use the Recorder CR as its owner ref.
-   sec := tsrStateSecret(tsr, r.tsNamespace)
-   if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, sec, func(s *corev1.Secret) {
-       s.ObjectMeta.Labels = sec.ObjectMeta.Labels
-       s.ObjectMeta.Annotations = sec.ObjectMeta.Annotations
-   }); err != nil {
-       return fmt.Errorf("error creating state Secret: %w", err)
+
+   // State Secrets are pre-created so we can use the Recorder CR as its owner ref.
+   var replicas int32 = 1
+   if tsr.Spec.Replicas != nil {
+       replicas = *tsr.Spec.Replicas
+   }
+   for replica := range replicas {
+       sec := tsrStateSecret(tsr, r.tsNamespace, replica)
+       _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, sec, func(s *corev1.Secret) {
+           s.ObjectMeta.Labels = sec.ObjectMeta.Labels
+           s.ObjectMeta.Annotations = sec.ObjectMeta.Annotations
+       })
+       if err != nil {
+           return fmt.Errorf("error creating state Secret %q: %w", sec.Name, err)
+       }
    }
    sa := tsrServiceAccount(tsr, r.tsNamespace)
-   if _, err := createOrMaybeUpdate(ctx, r.Client, r.tsNamespace, sa, func(s *corev1.ServiceAccount) error {
+   _, err := createOrMaybeUpdate(ctx, r.Client, r.tsNamespace, sa, func(s *corev1.ServiceAccount) error {
        // Perform this check within the update function to make sure we don't
        // have a race condition between the previous check and the update.
        if err := saOwnedByRecorder(s, tsr); err != nil {
@@ -183,54 +196,68 @@ func (r *RecorderReconciler) maybeProvision(ctx context.Context, tsr *tsapi.Reco
        s.ObjectMeta.Annotations = sa.ObjectMeta.Annotations
        return nil
-   }); err != nil {
+   })
+   if err != nil {
        return fmt.Errorf("error creating ServiceAccount: %w", err)
    }
    role := tsrRole(tsr, r.tsNamespace)
-   if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, role, func(r *rbacv1.Role) {
+   _, err = createOrUpdate(ctx, r.Client, r.tsNamespace, role, func(r *rbacv1.Role) {
        r.ObjectMeta.Labels = role.ObjectMeta.Labels
        r.ObjectMeta.Annotations = role.ObjectMeta.Annotations
        r.Rules = role.Rules
-   }); err != nil {
+   })
+   if err != nil {
        return fmt.Errorf("error creating Role: %w", err)
    }
    roleBinding := tsrRoleBinding(tsr, r.tsNamespace)
-   if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, roleBinding, func(r *rbacv1.RoleBinding) {
+   _, err = createOrUpdate(ctx, r.Client, r.tsNamespace, roleBinding, func(r *rbacv1.RoleBinding) {
        r.ObjectMeta.Labels = roleBinding.ObjectMeta.Labels
        r.ObjectMeta.Annotations = roleBinding.ObjectMeta.Annotations
        r.RoleRef = roleBinding.RoleRef
        r.Subjects = roleBinding.Subjects
-   }); err != nil {
+   })
+   if err != nil {
        return fmt.Errorf("error creating RoleBinding: %w", err)
    }
    ss := tsrStatefulSet(tsr, r.tsNamespace, r.loginServer)
-   if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, ss, func(s *appsv1.StatefulSet) {
+   _, err = createOrUpdate(ctx, r.Client, r.tsNamespace, ss, func(s *appsv1.StatefulSet) {
        s.ObjectMeta.Labels = ss.ObjectMeta.Labels
        s.ObjectMeta.Annotations = ss.ObjectMeta.Annotations
        s.Spec = ss.Spec
-   }); err != nil {
+   })
+   if err != nil {
        return fmt.Errorf("error creating StatefulSet: %w", err)
    }
    // ServiceAccount name may have changed, in which case we need to clean up
    // the previous ServiceAccount. RoleBinding will already be updated to point
    // to the new ServiceAccount.
-   if err := r.maybeCleanupServiceAccounts(ctx, tsr, sa.Name); err != nil {
+   if err = r.maybeCleanupServiceAccounts(ctx, tsr, sa.Name); err != nil {
        return fmt.Errorf("error cleaning up ServiceAccounts: %w", err)
    }

+   // If we have scaled the recorder down, we will have dangling state secrets
+   // that we need to clean up.
+   if err = r.maybeCleanupSecrets(ctx, tsr); err != nil {
+       return fmt.Errorf("error cleaning up Secrets: %w", err)
+   }
+
    var devices []tsapi.RecorderTailnetDevice
-   device, ok, err := r.getDeviceInfo(ctx, tsr.Name)
-   if err != nil {
-       return fmt.Errorf("failed to get device info: %w", err)
-   }
-   if !ok {
-       logger.Debugf("no Tailscale hostname known yet, waiting for Recorder pod to finish auth")
-       return nil
-   }
-   devices = append(devices, device)
+   for replica := range replicas {
+       dev, ok, err := r.getDeviceInfo(ctx, tsr.Name, replica)
+       switch {
+       case err != nil:
+           return fmt.Errorf("failed to get device info: %w", err)
+       case !ok:
+           logger.Debugf("no Tailscale hostname known yet, waiting for Recorder pod to finish auth")
+           continue
+       }
+       devices = append(devices, dev)
+   }

    tsr.Status.Devices = devices
@@ -257,22 +284,89 @@ func saOwnedByRecorder(sa *corev1.ServiceAccount, tsr *tsapi.Recorder) error {
func (r *RecorderReconciler) maybeCleanupServiceAccounts(ctx context.Context, tsr *tsapi.Recorder, currentName string) error {
    logger := r.logger(tsr.Name)

-   // List all ServiceAccounts owned by this Recorder.
+   options := []client.ListOption{
+       client.InNamespace(r.tsNamespace),
+       client.MatchingLabels(tsrLabels("recorder", tsr.Name, nil)),
+   }
    sas := &corev1.ServiceAccountList{}
-   if err := r.List(ctx, sas, client.InNamespace(r.tsNamespace), client.MatchingLabels(labels("recorder", tsr.Name, nil))); err != nil {
+   if err := r.List(ctx, sas, options...); err != nil {
        return fmt.Errorf("error listing ServiceAccounts for cleanup: %w", err)
    }
-   for _, sa := range sas.Items {
-       if sa.Name == currentName {
+
+   for _, serviceAccount := range sas.Items {
+       if serviceAccount.Name == currentName {
            continue
        }
-       if err := r.Delete(ctx, &sa); err != nil {
-           if apierrors.IsNotFound(err) {
-               logger.Debugf("ServiceAccount %s not found, likely already deleted", sa.Name)
-           } else {
-               return fmt.Errorf("error deleting ServiceAccount %s: %w", sa.Name, err)
-           }
+
+       err := r.Delete(ctx, &serviceAccount)
+       switch {
+       case apierrors.IsNotFound(err):
+           logger.Debugf("ServiceAccount %s not found, likely already deleted", serviceAccount.Name)
+           continue
+       case err != nil:
+           return fmt.Errorf("error deleting ServiceAccount %s: %w", serviceAccount.Name, err)
        }
    }
+
+   return nil
+}

func (r *RecorderReconciler) maybeCleanupSecrets(ctx context.Context, tsr *tsapi.Recorder) error {
options := []client.ListOption{
client.InNamespace(r.tsNamespace),
client.MatchingLabels(tsrLabels("recorder", tsr.Name, nil)),
}
secrets := &corev1.SecretList{}
if err := r.List(ctx, secrets, options...); err != nil {
return fmt.Errorf("error listing Secrets for cleanup: %w", err)
}
// Get the largest ordinal suffix that we expect. Then we'll go through the list of secrets owned by this
// recorder and remove them.
var replicas int32 = 1
if tsr.Spec.Replicas != nil {
replicas = *tsr.Spec.Replicas
}
for _, secret := range secrets.Items {
parts := strings.Split(secret.Name, "-")
if len(parts) == 0 {
continue
}
ordinal, err := strconv.ParseUint(parts[len(parts)-1], 10, 32)
if err != nil {
return fmt.Errorf("error parsing secret name %q: %w", secret.Name, err)
}
if int32(ordinal) < replicas {
continue
}
devicePrefs, ok, err := getDevicePrefs(&secret)
if err != nil {
return err
}
if ok {
var errResp *tailscale.ErrResponse
r.log.Debugf("deleting device %s", devicePrefs.Config.NodeID)
err = r.tsClient.DeleteDevice(ctx, string(devicePrefs.Config.NodeID))
switch {
case errors.As(err, &errResp) && errResp.Status == http.StatusNotFound:
// This device has possibly already been deleted in the admin console. So we can ignore this
// and move on to removing the secret.
case err != nil:
return err
}
}
if err = r.Delete(ctx, &secret); err != nil {
return err
}
    }

    return nil

@@ -284,12 +378,18 @@ func (r *RecorderReconciler) maybeCleanupServiceAccounts(ctx context.Context, ts
func (r *RecorderReconciler) maybeCleanup(ctx context.Context, tsr *tsapi.Recorder) (bool, error) {
    logger := r.logger(tsr.Name)

-   prefs, ok, err := r.getDevicePrefs(ctx, tsr.Name)
-   if err != nil {
-       return false, err
-   }
-   if !ok {
-       logger.Debugf("state Secret %s-0 not found or does not contain node ID, continuing cleanup", tsr.Name)
+   var replicas int32 = 1
+   if tsr.Spec.Replicas != nil {
+       replicas = *tsr.Spec.Replicas
+   }
+   for replica := range replicas {
+       devicePrefs, ok, err := r.getDevicePrefs(ctx, tsr.Name, replica)
+       if err != nil {
+           return false, err
+       }
+       if !ok {
+           logger.Debugf("state Secret %s-%d not found or does not contain node ID, continuing cleanup", tsr.Name, replica)
            r.mu.Lock()
            r.recorders.Remove(tsr.UID)
            gaugeRecorderResources.Set(int64(r.recorders.Len()))
@@ -297,17 +397,19 @@ func (r *RecorderReconciler) maybeCleanup(ctx context.Context, tsr *tsapi.Record
            return true, nil
        }

-   id := string(prefs.Config.NodeID)
-   logger.Debugf("deleting device %s from control", string(id))
-   if err := r.tsClient.DeleteDevice(ctx, string(id)); err != nil {
-       errResp := &tailscale.ErrResponse{}
-       if ok := errors.As(err, errResp); ok && errResp.Status == http.StatusNotFound {
-           logger.Debugf("device %s not found, likely because it has already been deleted from control", string(id))
-       } else {
-           return false, fmt.Errorf("error deleting device: %w", err)
-       }
-   } else {
-       logger.Debugf("device %s deleted from control", string(id))
-   }
+       nodeID := string(devicePrefs.Config.NodeID)
+       logger.Debugf("deleting device %s from control", nodeID)
+       if err = r.tsClient.DeleteDevice(ctx, nodeID); err != nil {
+           errResp := &tailscale.ErrResponse{}
+           if errors.As(err, errResp) && errResp.Status == http.StatusNotFound {
+               logger.Debugf("device %s not found, likely because it has already been deleted from control", nodeID)
+               continue
+           }
+           return false, fmt.Errorf("error deleting device: %w", err)
+       }
+       logger.Debugf("device %s deleted from control", nodeID)
+   }
@@ -319,39 +421,47 @@ func (r *RecorderReconciler) maybeCleanup(ctx context.Context, tsr *tsapi.Record
    // Unlike most log entries in the reconcile loop, this will get printed
    r.recorders.Remove(tsr.UID)
    gaugeRecorderResources.Set(int64(r.recorders.Len()))
    r.mu.Unlock()

    return true, nil
}

-func (r *RecorderReconciler) ensureAuthSecretCreated(ctx context.Context, tsr *tsapi.Recorder) error {
-   logger := r.logger(tsr.Name)
-   key := types.NamespacedName{
-       Namespace: r.tsNamespace,
-       Name:      tsr.Name,
-   }
-   if err := r.Get(ctx, key, &corev1.Secret{}); err == nil {
-       // No updates, already created the auth key.
-       logger.Debugf("auth Secret %s already exists", key.Name)
-       return nil
-   } else if !apierrors.IsNotFound(err) {
-       return err
-   }
-   // Create the auth key Secret which is going to be used by the StatefulSet
-   // to authenticate with Tailscale.
-   logger.Debugf("creating authkey for new Recorder")
+func (r *RecorderReconciler) ensureAuthSecretsCreated(ctx context.Context, tsr *tsapi.Recorder) error {
+   var replicas int32 = 1
+   if tsr.Spec.Replicas != nil {
+       replicas = *tsr.Spec.Replicas
+   }
    tags := tsr.Spec.Tags
    if len(tags) == 0 {
        tags = tsapi.Tags{"tag:k8s"}
    }
+   logger := r.logger(tsr.Name)
+   for replica := range replicas {
+       key := types.NamespacedName{
+           Namespace: r.tsNamespace,
+           Name:      fmt.Sprintf("%s-auth-%d", tsr.Name, replica),
+       }
+       err := r.Get(ctx, key, &corev1.Secret{})
+       switch {
+       case err == nil:
+           logger.Debugf("auth Secret %q already exists", key.Name)
+           continue
+       case !apierrors.IsNotFound(err):
+           return fmt.Errorf("failed to get Secret %q: %w", key.Name, err)
+       }
+
        authKey, err := newAuthKey(ctx, r.tsClient, tags.Stringify())
        if err != nil {
            return err
        }
-   logger.Debug("creating a new Secret for the Recorder")
-   if err := r.Create(ctx, tsrAuthSecret(tsr, r.tsNamespace, authKey)); err != nil {
+       if err = r.Create(ctx, tsrAuthSecret(tsr, r.tsNamespace, authKey, replica)); err != nil {
            return err
        }
+   }

    return nil
}
@@ -361,6 +471,10 @@ func (r *RecorderReconciler) validate(ctx context.Context, tsr *tsapi.Recorder)
        return errors.New("must either enable UI or use S3 storage to ensure recordings are accessible")
    }

+   if tsr.Spec.Replicas != nil && *tsr.Spec.Replicas > 1 && tsr.Spec.Storage.S3 == nil {
+       return errors.New("must use S3 storage when using multiple replicas to ensure recordings are accessible")
+   }
+
    // Check any custom ServiceAccount config doesn't conflict with pre-existing
    // ServiceAccounts. This check is performed once during validation to ensure
    // errors are raised early, but also again during any Updates to prevent a race.
@@ -394,11 +508,11 @@ func (r *RecorderReconciler) validate(ctx context.Context, tsr *tsapi.Recorder)
    return nil
}

-func (r *RecorderReconciler) getStateSecret(ctx context.Context, tsrName string) (*corev1.Secret, error) {
+func (r *RecorderReconciler) getStateSecret(ctx context.Context, tsrName string, replica int32) (*corev1.Secret, error) {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Namespace: r.tsNamespace,
-           Name:      fmt.Sprintf("%s-0", tsrName),
+           Name:      fmt.Sprintf("%s-%d", tsrName, replica),
        },
    }
    if err := r.Get(ctx, client.ObjectKeyFromObject(secret), secret); err != nil {
@@ -412,8 +526,8 @@ func (r *RecorderReconciler) getStateSecret(ctx context.Context, tsrName string)
    return secret, nil
}

-func (r *RecorderReconciler) getDevicePrefs(ctx context.Context, tsrName string) (prefs prefs, ok bool, err error) {
-   secret, err := r.getStateSecret(ctx, tsrName)
+func (r *RecorderReconciler) getDevicePrefs(ctx context.Context, tsrName string, replica int32) (prefs prefs, ok bool, err error) {
+   secret, err := r.getStateSecret(ctx, tsrName, replica)
    if err != nil || secret == nil {
        return prefs, false, err
    }
@@ -441,8 +555,8 @@ func getDevicePrefs(secret *corev1.Secret) (prefs prefs, ok bool, err error) {
    return prefs, ok, nil
}

-func (r *RecorderReconciler) getDeviceInfo(ctx context.Context, tsrName string) (d tsapi.RecorderTailnetDevice, ok bool, err error) {
-   secret, err := r.getStateSecret(ctx, tsrName)
+func (r *RecorderReconciler) getDeviceInfo(ctx context.Context, tsrName string, replica int32) (d tsapi.RecorderTailnetDevice, ok bool, err error) {
+   secret, err := r.getStateSecret(ctx, tsrName, replica)
    if err != nil || secret == nil {
        return tsapi.RecorderTailnetDevice{}, false, err
    }

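The replica loops above all derive per-replica resource names from the Recorder name plus the StatefulSet ordinal. A quick sketch of the naming scheme as used in this diff, with a hypothetical Recorder name:

package main

import "fmt"

func main() {
    const name = "rec" // hypothetical Recorder name
    var replicas int32 = 2
    for replica := range replicas { // range over an integer needs Go 1.22+
        fmt.Printf("state Secret: %s-%d\n", name, replica)      // rec-0, rec-1 (matches the pod names)
        fmt.Printf("auth Secret:  %s-auth-%d\n", name, replica) // rec-auth-0, rec-auth-1
    }
}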
@@ -12,30 +12,36 @@ import (
    corev1 "k8s.io/api/core/v1"
    rbacv1 "k8s.io/api/rbac/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
    "tailscale.com/types/ptr"
    "tailscale.com/version"
)

func tsrStatefulSet(tsr *tsapi.Recorder, namespace string, loginServer string) *appsv1.StatefulSet {
-   return &appsv1.StatefulSet{
+   var replicas int32 = 1
+   if tsr.Spec.Replicas != nil {
+       replicas = *tsr.Spec.Replicas
+   }
+   ss := &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{
            Name:            tsr.Name,
            Namespace:       namespace,
-           Labels:          labels("recorder", tsr.Name, tsr.Spec.StatefulSet.Labels),
+           Labels:          tsrLabels("recorder", tsr.Name, tsr.Spec.StatefulSet.Labels),
            OwnerReferences: tsrOwnerReference(tsr),
            Annotations:     tsr.Spec.StatefulSet.Annotations,
        },
        Spec: appsv1.StatefulSetSpec{
-           Replicas: ptr.To[int32](1),
+           Replicas: ptr.To(replicas),
            Selector: &metav1.LabelSelector{
-               MatchLabels: labels("recorder", tsr.Name, tsr.Spec.StatefulSet.Pod.Labels),
+               MatchLabels: tsrLabels("recorder", tsr.Name, tsr.Spec.StatefulSet.Pod.Labels),
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Name:        tsr.Name,
                    Namespace:   namespace,
-                   Labels:      labels("recorder", tsr.Name, tsr.Spec.StatefulSet.Pod.Labels),
+                   Labels:      tsrLabels("recorder", tsr.Name, tsr.Spec.StatefulSet.Pod.Labels),
                    Annotations: tsr.Spec.StatefulSet.Pod.Annotations,
                },
                Spec: corev1.PodSpec{
@@ -59,7 +65,7 @@ func tsrStatefulSet(tsr *tsapi.Recorder, namespace string, loginServer string) *
                    ImagePullPolicy: tsr.Spec.StatefulSet.Pod.Container.ImagePullPolicy,
                    Resources:       tsr.Spec.StatefulSet.Pod.Container.Resources,
                    SecurityContext: tsr.Spec.StatefulSet.Pod.Container.SecurityContext,
-                   Env:             env(tsr, loginServer),
+                   Env:             tsrEnv(tsr, loginServer),
                    EnvFrom: func() []corev1.EnvFromSource {
                        if tsr.Spec.Storage.S3 == nil || tsr.Spec.Storage.S3.Credentials.Secret.Name == "" {
                            return nil
@@ -95,6 +101,28 @@ func tsrStatefulSet(tsr *tsapi.Recorder, namespace string, loginServer string) *
            },
        },
    }
+
+   for replica := range replicas {
+       volumeName := fmt.Sprintf("authkey-%d", replica)
+       ss.Spec.Template.Spec.Containers[0].VolumeMounts = append(ss.Spec.Template.Spec.Containers[0].VolumeMounts, corev1.VolumeMount{
+           Name:      volumeName,
+           ReadOnly:  true,
+           MountPath: fmt.Sprintf("/etc/tailscaled/%s-%d", ss.Name, replica),
+       })
+       ss.Spec.Template.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
+           Name: volumeName,
+           VolumeSource: corev1.VolumeSource{
+               Secret: &corev1.SecretVolumeSource{
+                   SecretName: fmt.Sprintf("%s-auth-%d", tsr.Name, replica),
+                   Items:      []corev1.KeyToPath{{Key: "authkey", Path: "authkey"}},
+               },
+           },
+       })
+   }
+
+   return ss
}
func tsrServiceAccount(tsr *tsapi.Recorder, namespace string) *corev1.ServiceAccount { func tsrServiceAccount(tsr *tsapi.Recorder, namespace string) *corev1.ServiceAccount {
@ -102,7 +130,7 @@ func tsrServiceAccount(tsr *tsapi.Recorder, namespace string) *corev1.ServiceAcc
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: tsrServiceAccountName(tsr), Name: tsrServiceAccountName(tsr),
Namespace: namespace, Namespace: namespace,
Labels: labels("recorder", tsr.Name, nil), Labels: tsrLabels("recorder", tsr.Name, nil),
OwnerReferences: tsrOwnerReference(tsr), OwnerReferences: tsrOwnerReference(tsr),
Annotations: tsr.Spec.StatefulSet.Pod.ServiceAccount.Annotations, Annotations: tsr.Spec.StatefulSet.Pod.ServiceAccount.Annotations,
}, },
@ -120,11 +148,24 @@ func tsrServiceAccountName(tsr *tsapi.Recorder) string {
} }
func tsrRole(tsr *tsapi.Recorder, namespace string) *rbacv1.Role { func tsrRole(tsr *tsapi.Recorder, namespace string) *rbacv1.Role {
var replicas int32 = 1
if tsr.Spec.Replicas != nil {
replicas = *tsr.Spec.Replicas
}
resourceNames := make([]string, 0)
for replica := range replicas {
resourceNames = append(resourceNames,
fmt.Sprintf("%s-%d", tsr.Name, replica), // State secret.
fmt.Sprintf("%s-auth-%d", tsr.Name, replica), // Auth key secret.
)
}
return &rbacv1.Role{ return &rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: tsr.Name, Name: tsr.Name,
Namespace: namespace, Namespace: namespace,
Labels: labels("recorder", tsr.Name, nil), Labels: tsrLabels("recorder", tsr.Name, nil),
OwnerReferences: tsrOwnerReference(tsr), OwnerReferences: tsrOwnerReference(tsr),
}, },
Rules: []rbacv1.PolicyRule{ Rules: []rbacv1.PolicyRule{
@ -136,10 +177,7 @@ func tsrRole(tsr *tsapi.Recorder, namespace string) *rbacv1.Role {
"patch", "patch",
"update", "update",
}, },
ResourceNames: []string{ ResourceNames: resourceNames,
tsr.Name, // Contains the auth key.
fmt.Sprintf("%s-0", tsr.Name), // Contains the node state.
},
}, },
{ {
APIGroups: []string{""}, APIGroups: []string{""},
@ -159,7 +197,7 @@ func tsrRoleBinding(tsr *tsapi.Recorder, namespace string) *rbacv1.RoleBinding {
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: tsr.Name, Name: tsr.Name,
Namespace: namespace, Namespace: namespace,
Labels: labels("recorder", tsr.Name, nil), Labels: tsrLabels("recorder", tsr.Name, nil),
OwnerReferences: tsrOwnerReference(tsr), OwnerReferences: tsrOwnerReference(tsr),
}, },
Subjects: []rbacv1.Subject{ Subjects: []rbacv1.Subject{
@ -176,12 +214,12 @@ func tsrRoleBinding(tsr *tsapi.Recorder, namespace string) *rbacv1.RoleBinding {
} }
} }
func tsrAuthSecret(tsr *tsapi.Recorder, namespace string, authKey string) *corev1.Secret { func tsrAuthSecret(tsr *tsapi.Recorder, namespace string, authKey string, replica int32) *corev1.Secret {
return &corev1.Secret{ return &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Namespace: namespace, Namespace: namespace,
Name: tsr.Name, Name: fmt.Sprintf("%s-auth-%d", tsr.Name, replica),
Labels: labels("recorder", tsr.Name, nil), Labels: tsrLabels("recorder", tsr.Name, nil),
OwnerReferences: tsrOwnerReference(tsr), OwnerReferences: tsrOwnerReference(tsr),
}, },
StringData: map[string]string{ StringData: map[string]string{
@ -190,30 +228,19 @@ func tsrAuthSecret(tsr *tsapi.Recorder, namespace string, authKey string) *corev
} }
} }
func tsrStateSecret(tsr *tsapi.Recorder, namespace string) *corev1.Secret { func tsrStateSecret(tsr *tsapi.Recorder, namespace string, replica int32) *corev1.Secret {
return &corev1.Secret{ return &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-0", tsr.Name), Name: fmt.Sprintf("%s-%d", tsr.Name, replica),
Namespace: namespace, Namespace: namespace,
Labels: labels("recorder", tsr.Name, nil), Labels: tsrLabels("recorder", tsr.Name, nil),
OwnerReferences: tsrOwnerReference(tsr), OwnerReferences: tsrOwnerReference(tsr),
}, },
} }
} }
func env(tsr *tsapi.Recorder, loginServer string) []corev1.EnvVar { func tsrEnv(tsr *tsapi.Recorder, loginServer string) []corev1.EnvVar {
envs := []corev1.EnvVar{ envs := []corev1.EnvVar{
{
Name: "TS_AUTHKEY",
ValueFrom: &corev1.EnvVarSource{
SecretKeyRef: &corev1.SecretKeySelector{
LocalObjectReference: corev1.LocalObjectReference{
Name: tsr.Name,
},
Key: "authkey",
},
},
},
{ {
Name: "POD_NAME", Name: "POD_NAME",
ValueFrom: &corev1.EnvVarSource{ ValueFrom: &corev1.EnvVarSource{
@ -231,6 +258,10 @@ func env(tsr *tsapi.Recorder, loginServer string) []corev1.EnvVar {
}, },
}, },
}, },
{
Name: "TS_AUTHKEY_FILE",
Value: "/etc/tailscaled/$(POD_NAME)/authkey",
},
{ {
Name: "TS_STATE", Name: "TS_STATE",
Value: "kube:$(POD_NAME)", Value: "kube:$(POD_NAME)",
@ -280,18 +311,18 @@ func env(tsr *tsapi.Recorder, loginServer string) []corev1.EnvVar {
return envs return envs
} }
func labels(app, instance string, customLabels map[string]string) map[string]string { func tsrLabels(app, instance string, customLabels map[string]string) map[string]string {
l := make(map[string]string, len(customLabels)+3) labels := make(map[string]string, len(customLabels)+3)
for k, v := range customLabels { for k, v := range customLabels {
l[k] = v labels[k] = v
} }
// ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/ // ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
l["app.kubernetes.io/name"] = app labels["app.kubernetes.io/name"] = app
l["app.kubernetes.io/instance"] = instance labels["app.kubernetes.io/instance"] = instance
l["app.kubernetes.io/managed-by"] = "tailscale-operator" labels["app.kubernetes.io/managed-by"] = "tailscale-operator"
return l return labels
} }
func tsrOwnerReference(owner metav1.Object) []metav1.OwnerReference { func tsrOwnerReference(owner metav1.Object) []metav1.OwnerReference {

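The TS_AUTHKEY_FILE value above works because Kubernetes expands $(POD_NAME) in container env values that reference an earlier env var, and StatefulSet pods are named <name>-<ordinal>, so each pod's expanded path lands on exactly one of the per-replica auth key mounts created in tsrStatefulSet. A small sketch of the path arithmetic; the StatefulSet name and replica count are illustrative:

package main

import "fmt"

func main() {
	// Each replica gets a Secret volume mounted at
	// /etc/tailscaled/<statefulset-name>-<replica>, and the container sets
	// TS_AUTHKEY_FILE=/etc/tailscaled/$(POD_NAME)/authkey. Because pod
	// ordinals match replica indices, each pod reads its own key.
	for replica := range int32(3) {
		podName := fmt.Sprintf("%s-%d", "test", replica)
		fmt.Printf("%s -> /etc/tailscaled/%s/authkey\n", podName, podName)
	}
}
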
@ -12,6 +12,7 @@ import (
corev1 "k8s.io/api/core/v1" corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1" tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/types/ptr" "tailscale.com/types/ptr"
) )
@ -23,6 +24,7 @@ func TestRecorderSpecs(t *testing.T) {
Name: "test", Name: "test",
}, },
Spec: tsapi.RecorderSpec{ Spec: tsapi.RecorderSpec{
Replicas: ptr.To[int32](3),
StatefulSet: tsapi.RecorderStatefulSet{ StatefulSet: tsapi.RecorderStatefulSet{
Labels: map[string]string{ Labels: map[string]string{
"ss-label-key": "ss-label-value", "ss-label-key": "ss-label-value",
@ -101,10 +103,10 @@ func TestRecorderSpecs(t *testing.T) {
} }
// Pod-level. // Pod-level.
if diff := cmp.Diff(ss.Labels, labels("recorder", "test", tsr.Spec.StatefulSet.Labels)); diff != "" { if diff := cmp.Diff(ss.Labels, tsrLabels("recorder", "test", tsr.Spec.StatefulSet.Labels)); diff != "" {
t.Errorf("(-got +want):\n%s", diff) t.Errorf("(-got +want):\n%s", diff)
} }
if diff := cmp.Diff(ss.Spec.Template.Labels, labels("recorder", "test", tsr.Spec.StatefulSet.Pod.Labels)); diff != "" { if diff := cmp.Diff(ss.Spec.Template.Labels, tsrLabels("recorder", "test", tsr.Spec.StatefulSet.Pod.Labels)); diff != "" {
t.Errorf("(-got +want):\n%s", diff) t.Errorf("(-got +want):\n%s", diff)
} }
if diff := cmp.Diff(ss.Spec.Template.Spec.Affinity, tsr.Spec.StatefulSet.Pod.Affinity); diff != "" { if diff := cmp.Diff(ss.Spec.Template.Spec.Affinity, tsr.Spec.StatefulSet.Pod.Affinity); diff != "" {
@ -124,7 +126,7 @@ func TestRecorderSpecs(t *testing.T) {
} }
// Container-level. // Container-level.
if diff := cmp.Diff(ss.Spec.Template.Spec.Containers[0].Env, env(tsr, tsLoginServer)); diff != "" { if diff := cmp.Diff(ss.Spec.Template.Spec.Containers[0].Env, tsrEnv(tsr, tsLoginServer)); diff != "" {
t.Errorf("(-got +want):\n%s", diff) t.Errorf("(-got +want):\n%s", diff)
} }
if diff := cmp.Diff(ss.Spec.Template.Spec.Containers[0].Image, tsr.Spec.StatefulSet.Pod.Container.Image); diff != "" { if diff := cmp.Diff(ss.Spec.Template.Spec.Containers[0].Image, tsr.Spec.StatefulSet.Pod.Container.Image); diff != "" {
@ -139,5 +141,17 @@ func TestRecorderSpecs(t *testing.T) {
if diff := cmp.Diff(ss.Spec.Template.Spec.Containers[0].Resources, tsr.Spec.StatefulSet.Pod.Container.Resources); diff != "" { if diff := cmp.Diff(ss.Spec.Template.Spec.Containers[0].Resources, tsr.Spec.StatefulSet.Pod.Container.Resources); diff != "" {
t.Errorf("(-got +want):\n%s", diff) t.Errorf("(-got +want):\n%s", diff)
} }
if *ss.Spec.Replicas != *tsr.Spec.Replicas {
t.Errorf("expected %d replicas, got %d", *tsr.Spec.Replicas, *ss.Spec.Replicas)
}
if len(ss.Spec.Template.Spec.Volumes) != int(*tsr.Spec.Replicas)+1 {
t.Errorf("expected %d volumes, got %d", *tsr.Spec.Replicas+1, len(ss.Spec.Template.Spec.Volumes))
}
if len(ss.Spec.Template.Spec.Containers[0].VolumeMounts) != int(*tsr.Spec.Replicas)+1 {
t.Errorf("expected %d volume mounts, got %d", *tsr.Spec.Replicas+1, len(ss.Spec.Template.Spec.Containers[0].VolumeMounts))
}
}) })
} }

@ -8,6 +8,7 @@ package main
import ( import (
"context" "context"
"encoding/json" "encoding/json"
"fmt"
"strings" "strings"
"testing" "testing"
@ -20,9 +21,11 @@ import (
"k8s.io/client-go/tools/record" "k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client" "sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake" "sigs.k8s.io/controller-runtime/pkg/client/fake"
tsoperator "tailscale.com/k8s-operator" tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1" tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest" "tailscale.com/tstest"
"tailscale.com/types/ptr"
) )
const ( const (
@ -36,6 +39,9 @@ func TestRecorder(t *testing.T) {
Name: "test", Name: "test",
Finalizers: []string{"tailscale.com/finalizer"}, Finalizers: []string{"tailscale.com/finalizer"},
}, },
Spec: tsapi.RecorderSpec{
Replicas: ptr.To[int32](3),
},
} }
fc := fake.NewClientBuilder(). fc := fake.NewClientBuilder().
@ -52,7 +58,7 @@ func TestRecorder(t *testing.T) {
Client: fc, Client: fc,
tsClient: tsClient, tsClient: tsClient,
recorder: fr, recorder: fr,
l: zl.Sugar(), log: zl.Sugar(),
clock: cl, clock: cl,
loginServer: tsLoginServer, loginServer: tsLoginServer,
} }
@ -80,6 +86,15 @@ func TestRecorder(t *testing.T) {
}) })
expectReconciled(t, reconciler, "", tsr.Name) expectReconciled(t, reconciler, "", tsr.Name)
expectedEvent = "Warning RecorderInvalid Recorder is invalid: must use S3 storage when using multiple replicas to ensure recordings are accessible"
expectEvents(t, fr, []string{expectedEvent})
tsr.Spec.Storage.S3 = &tsapi.S3{}
mustUpdate(t, fc, "", "test", func(t *tsapi.Recorder) {
t.Spec = tsr.Spec
})
expectReconciled(t, reconciler, "", tsr.Name)
// Only check part of this error message, because it's defined in an // Only check part of this error message, because it's defined in an
// external package and may change. // external package and may change.
if err := fc.Get(context.Background(), client.ObjectKey{ if err := fc.Get(context.Background(), client.ObjectKey{
@ -180,11 +195,14 @@ func TestRecorder(t *testing.T) {
}) })
t.Run("populate_node_info_in_state_secret_and_see_it_appear_in_status", func(t *testing.T) { t.Run("populate_node_info_in_state_secret_and_see_it_appear_in_status", func(t *testing.T) {
const key = "profile-abc"
for replica := range *tsr.Spec.Replicas {
bytes, err := json.Marshal(map[string]any{ bytes, err := json.Marshal(map[string]any{
"Config": map[string]any{ "Config": map[string]any{
"NodeID": "nodeid-123", "NodeID": fmt.Sprintf("node-%d", replica),
"UserProfile": map[string]any{ "UserProfile": map[string]any{
"LoginName": "test-0.example.ts.net", "LoginName": fmt.Sprintf("test-%d.example.ts.net", replica),
}, },
}, },
}) })
@ -192,21 +210,32 @@ func TestRecorder(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
const key = "profile-abc" name := fmt.Sprintf("%s-%d", "test", replica)
mustUpdate(t, fc, tsNamespace, "test-0", func(s *corev1.Secret) { mustUpdate(t, fc, tsNamespace, name, func(s *corev1.Secret) {
s.Data = map[string][]byte{ s.Data = map[string][]byte{
currentProfileKey: []byte(key), currentProfileKey: []byte(key),
key: bytes, key: bytes,
} }
}) })
}
expectReconciled(t, reconciler, "", tsr.Name) expectReconciled(t, reconciler, "", tsr.Name)
tsr.Status.Devices = []tsapi.RecorderTailnetDevice{ tsr.Status.Devices = []tsapi.RecorderTailnetDevice{
{ {
Hostname: "hostname-nodeid-123", Hostname: "hostname-node-0",
TailnetIPs: []string{"1.2.3.4", "::1"}, TailnetIPs: []string{"1.2.3.4", "::1"},
URL: "https://test-0.example.ts.net", URL: "https://test-0.example.ts.net",
}, },
{
Hostname: "hostname-node-1",
TailnetIPs: []string{"1.2.3.4", "::1"},
URL: "https://test-1.example.ts.net",
},
{
Hostname: "hostname-node-2",
TailnetIPs: []string{"1.2.3.4", "::1"},
URL: "https://test-2.example.ts.net",
},
} }
expectEqual(t, fc, tsr) expectEqual(t, fc, tsr)
}) })
@ -222,7 +251,7 @@ func TestRecorder(t *testing.T) {
if expected := 0; reconciler.recorders.Len() != expected { if expected := 0; reconciler.recorders.Len() != expected {
t.Fatalf("expected %d recorders, got %d", expected, reconciler.recorders.Len()) t.Fatalf("expected %d recorders, got %d", expected, reconciler.recorders.Len())
} }
if diff := cmp.Diff(tsClient.deleted, []string{"nodeid-123"}); diff != "" { if diff := cmp.Diff(tsClient.deleted, []string{"node-0", "node-1", "node-2"}); diff != "" {
t.Fatalf("unexpected deleted devices (-got +want):\n%s", diff) t.Fatalf("unexpected deleted devices (-got +want):\n%s", diff)
} }
// The fake client does not clean up objects whose owner has been // The fake client does not clean up objects whose owner has been
@ -233,26 +262,38 @@ func TestRecorder(t *testing.T) {
func expectRecorderResources(t *testing.T, fc client.WithWatch, tsr *tsapi.Recorder, shouldExist bool) { func expectRecorderResources(t *testing.T, fc client.WithWatch, tsr *tsapi.Recorder, shouldExist bool) {
t.Helper() t.Helper()
auth := tsrAuthSecret(tsr, tsNamespace, "secret-authkey") var replicas int32 = 1
state := tsrStateSecret(tsr, tsNamespace) if tsr.Spec.Replicas != nil {
replicas = *tsr.Spec.Replicas
}
role := tsrRole(tsr, tsNamespace) role := tsrRole(tsr, tsNamespace)
roleBinding := tsrRoleBinding(tsr, tsNamespace) roleBinding := tsrRoleBinding(tsr, tsNamespace)
serviceAccount := tsrServiceAccount(tsr, tsNamespace) serviceAccount := tsrServiceAccount(tsr, tsNamespace)
statefulSet := tsrStatefulSet(tsr, tsNamespace, tsLoginServer) statefulSet := tsrStatefulSet(tsr, tsNamespace, tsLoginServer)
if shouldExist { if shouldExist {
expectEqual(t, fc, auth)
expectEqual(t, fc, state)
expectEqual(t, fc, role) expectEqual(t, fc, role)
expectEqual(t, fc, roleBinding) expectEqual(t, fc, roleBinding)
expectEqual(t, fc, serviceAccount) expectEqual(t, fc, serviceAccount)
expectEqual(t, fc, statefulSet, removeResourceReqs) expectEqual(t, fc, statefulSet, removeResourceReqs)
} else { } else {
expectMissing[corev1.Secret](t, fc, auth.Namespace, auth.Name)
expectMissing[corev1.Secret](t, fc, state.Namespace, state.Name)
expectMissing[rbacv1.Role](t, fc, role.Namespace, role.Name) expectMissing[rbacv1.Role](t, fc, role.Namespace, role.Name)
expectMissing[rbacv1.RoleBinding](t, fc, roleBinding.Namespace, roleBinding.Name) expectMissing[rbacv1.RoleBinding](t, fc, roleBinding.Namespace, roleBinding.Name)
expectMissing[corev1.ServiceAccount](t, fc, serviceAccount.Namespace, serviceAccount.Name) expectMissing[corev1.ServiceAccount](t, fc, serviceAccount.Namespace, serviceAccount.Name)
expectMissing[appsv1.StatefulSet](t, fc, statefulSet.Namespace, statefulSet.Name) expectMissing[appsv1.StatefulSet](t, fc, statefulSet.Namespace, statefulSet.Name)
} }
for replica := range replicas {
auth := tsrAuthSecret(tsr, tsNamespace, "secret-authkey", replica)
state := tsrStateSecret(tsr, tsNamespace, replica)
if shouldExist {
expectEqual(t, fc, auth)
expectEqual(t, fc, state)
} else {
expectMissing[corev1.Secret](t, fc, auth.Namespace, auth.Name)
expectMissing[corev1.Secret](t, fc, state.Namespace, state.Name)
}
}
} }

@ -50,32 +50,32 @@ func NewConfigLoader(logger *zap.SugaredLogger, client clientcorev1.CoreV1Interf
} }
} }
func (l *configLoader) WatchConfig(ctx context.Context, path string) error { func (ld *configLoader) WatchConfig(ctx context.Context, path string) error {
secretNamespacedName, isKubeSecret := strings.CutPrefix(path, "kube:") secretNamespacedName, isKubeSecret := strings.CutPrefix(path, "kube:")
if isKubeSecret { if isKubeSecret {
secretNamespace, secretName, ok := strings.Cut(secretNamespacedName, string(types.Separator)) secretNamespace, secretName, ok := strings.Cut(secretNamespacedName, string(types.Separator))
if !ok { if !ok {
return fmt.Errorf("invalid Kubernetes Secret reference %q, expected format <namespace>/<name>", path) return fmt.Errorf("invalid Kubernetes Secret reference %q, expected format <namespace>/<name>", path)
} }
if err := l.watchConfigSecretChanges(ctx, secretNamespace, secretName); err != nil && !errors.Is(err, context.Canceled) { if err := ld.watchConfigSecretChanges(ctx, secretNamespace, secretName); err != nil && !errors.Is(err, context.Canceled) {
return fmt.Errorf("error watching config Secret %q: %w", secretNamespacedName, err) return fmt.Errorf("error watching config Secret %q: %w", secretNamespacedName, err)
} }
return nil return nil
} }
if err := l.watchConfigFileChanges(ctx, path); err != nil && !errors.Is(err, context.Canceled) { if err := ld.watchConfigFileChanges(ctx, path); err != nil && !errors.Is(err, context.Canceled) {
return fmt.Errorf("error watching config file %q: %w", path, err) return fmt.Errorf("error watching config file %q: %w", path, err)
} }
return nil return nil
} }
func (l *configLoader) reloadConfig(ctx context.Context, raw []byte) error { func (ld *configLoader) reloadConfig(ctx context.Context, raw []byte) error {
if bytes.Equal(raw, l.previous) { if bytes.Equal(raw, ld.previous) {
if l.cfgIgnored != nil && testenv.InTest() { if ld.cfgIgnored != nil && testenv.InTest() {
l.once.Do(func() { ld.once.Do(func() {
close(l.cfgIgnored) close(ld.cfgIgnored)
}) })
} }
return nil return nil
@ -89,14 +89,14 @@ func (l *configLoader) reloadConfig(ctx context.Context, raw []byte) error {
select { select {
case <-ctx.Done(): case <-ctx.Done():
return ctx.Err() return ctx.Err()
case l.cfgChan <- &cfg: case ld.cfgChan <- &cfg:
} }
l.previous = raw ld.previous = raw
return nil return nil
} }
func (l *configLoader) watchConfigFileChanges(ctx context.Context, path string) error { func (ld *configLoader) watchConfigFileChanges(ctx context.Context, path string) error {
var ( var (
tickChan <-chan time.Time tickChan <-chan time.Time
eventChan <-chan fsnotify.Event eventChan <-chan fsnotify.Event
@ -106,14 +106,14 @@ func (l *configLoader) watchConfigFileChanges(ctx context.Context, path string)
if w, err := fsnotify.NewWatcher(); err != nil { if w, err := fsnotify.NewWatcher(); err != nil {
// Creating a new fsnotify watcher would fail for example if inotify was not able to create a new file descriptor. // Creating a new fsnotify watcher would fail for example if inotify was not able to create a new file descriptor.
// See https://github.com/tailscale/tailscale/issues/15081 // See https://github.com/tailscale/tailscale/issues/15081
l.logger.Infof("Failed to create fsnotify watcher on config file %q; watching for changes on 5s timer: %v", path, err) ld.logger.Infof("Failed to create fsnotify watcher on config file %q; watching for changes on 5s timer: %v", path, err)
ticker := time.NewTicker(5 * time.Second) ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop() defer ticker.Stop()
tickChan = ticker.C tickChan = ticker.C
} else { } else {
dir := filepath.Dir(path) dir := filepath.Dir(path)
file := filepath.Base(path) file := filepath.Base(path)
l.logger.Infof("Watching directory %q for changes to config file %q", dir, file) ld.logger.Infof("Watching directory %q for changes to config file %q", dir, file)
defer w.Close() defer w.Close()
if err := w.Add(dir); err != nil { if err := w.Add(dir); err != nil {
return fmt.Errorf("failed to add fsnotify watch: %w", err) return fmt.Errorf("failed to add fsnotify watch: %w", err)
@ -128,7 +128,7 @@ func (l *configLoader) watchConfigFileChanges(ctx context.Context, path string)
if err != nil { if err != nil {
return fmt.Errorf("error reading config file %q: %w", path, err) return fmt.Errorf("error reading config file %q: %w", path, err)
} }
if err := l.reloadConfig(ctx, b); err != nil { if err := ld.reloadConfig(ctx, b); err != nil {
return fmt.Errorf("error loading initial config file %q: %w", path, err) return fmt.Errorf("error loading initial config file %q: %w", path, err)
} }
@ -163,14 +163,14 @@ func (l *configLoader) watchConfigFileChanges(ctx context.Context, path string)
if len(b) == 0 { if len(b) == 0 {
continue continue
} }
if err := l.reloadConfig(ctx, b); err != nil { if err := ld.reloadConfig(ctx, b); err != nil {
return fmt.Errorf("error reloading config file %q: %v", path, err) return fmt.Errorf("error reloading config file %q: %v", path, err)
} }
} }
} }
func (l *configLoader) watchConfigSecretChanges(ctx context.Context, secretNamespace, secretName string) error { func (ld *configLoader) watchConfigSecretChanges(ctx context.Context, secretNamespace, secretName string) error {
secrets := l.client.Secrets(secretNamespace) secrets := ld.client.Secrets(secretNamespace)
w, err := secrets.Watch(ctx, metav1.ListOptions{ w, err := secrets.Watch(ctx, metav1.ListOptions{
TypeMeta: metav1.TypeMeta{ TypeMeta: metav1.TypeMeta{
Kind: "Secret", Kind: "Secret",
@ -198,11 +198,11 @@ func (l *configLoader) watchConfigSecretChanges(ctx context.Context, secretNames
return fmt.Errorf("failed to get config Secret %q: %w", secretName, err) return fmt.Errorf("failed to get config Secret %q: %w", secretName, err)
} }
if err := l.configFromSecret(ctx, secret); err != nil { if err := ld.configFromSecret(ctx, secret); err != nil {
return fmt.Errorf("error loading initial config: %w", err) return fmt.Errorf("error loading initial config: %w", err)
} }
l.logger.Infof("Watching config Secret %q for changes", secretName) ld.logger.Infof("Watching config Secret %q for changes", secretName)
for { for {
var secret *corev1.Secret var secret *corev1.Secret
select { select {
@ -237,7 +237,7 @@ func (l *configLoader) watchConfigSecretChanges(ctx context.Context, secretNames
if secret == nil || secret.Data == nil { if secret == nil || secret.Data == nil {
continue continue
} }
if err := l.configFromSecret(ctx, secret); err != nil { if err := ld.configFromSecret(ctx, secret); err != nil {
return fmt.Errorf("error reloading config Secret %q: %v", secret.Name, err) return fmt.Errorf("error reloading config Secret %q: %v", secret.Name, err)
} }
case watch.Error: case watch.Error:
@ -250,13 +250,13 @@ func (l *configLoader) watchConfigSecretChanges(ctx context.Context, secretNames
} }
} }
func (l *configLoader) configFromSecret(ctx context.Context, s *corev1.Secret) error { func (ld *configLoader) configFromSecret(ctx context.Context, s *corev1.Secret) error {
b := s.Data[kubetypes.KubeAPIServerConfigFile] b := s.Data[kubetypes.KubeAPIServerConfigFile]
if len(b) == 0 { if len(b) == 0 {
return fmt.Errorf("config Secret %q does not contain expected config in key %q", s.Name, kubetypes.KubeAPIServerConfigFile) return fmt.Errorf("config Secret %q does not contain expected config in key %q", s.Name, kubetypes.KubeAPIServerConfigFile)
} }
if err := l.reloadConfig(ctx, b); err != nil { if err := ld.reloadConfig(ctx, b); err != nil {
return err return err
} }

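For orientation, the loader accepts either a plain file path (watched via fsnotify, with a 5s polling fallback) or a "kube:<namespace>/<name>" Secret reference. A usage sketch in the style of the tests further below, written as if placed alongside the loader; the context, logger, and clientset wiring is assumed rather than taken from this diff:

// runConfigWatch is a hypothetical wrapper showing the intended wiring.
func runConfigWatch(ctx context.Context, logger *zap.SugaredLogger, cs kubernetes.Interface) error {
	configChan := make(chan *conf.Config)
	loader := NewConfigLoader(logger, cs.CoreV1(), configChan)

	go func() {
		for cfg := range configChan {
			_ = cfg // apply the newly loaded configuration
		}
	}()

	// A plain file path (no "kube:" prefix) would watch the file instead.
	return loader.WatchConfig(ctx, "kube:tailscale/config-secret")
}
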
@ -125,15 +125,15 @@ func TestWatchConfig(t *testing.T) {
} }
} }
configChan := make(chan *conf.Config) configChan := make(chan *conf.Config)
l := NewConfigLoader(zap.Must(zap.NewDevelopment()).Sugar(), cl.CoreV1(), configChan) loader := NewConfigLoader(zap.Must(zap.NewDevelopment()).Sugar(), cl.CoreV1(), configChan)
l.cfgIgnored = make(chan struct{}) loader.cfgIgnored = make(chan struct{})
errs := make(chan error) errs := make(chan error)
ctx, cancel := context.WithCancel(t.Context()) ctx, cancel := context.WithCancel(t.Context())
defer cancel() defer cancel()
writeFile(t, tc.initialConfig) writeFile(t, tc.initialConfig)
go func() { go func() {
errs <- l.WatchConfig(ctx, cfgPath) errs <- loader.WatchConfig(ctx, cfgPath)
}() }()
for i, p := range tc.phases { for i, p := range tc.phases {
@ -159,7 +159,7 @@ func TestWatchConfig(t *testing.T) {
} else if !strings.Contains(err.Error(), p.expectedErr) { } else if !strings.Contains(err.Error(), p.expectedErr) {
t.Fatalf("expected error to contain %q, got %q", p.expectedErr, err.Error()) t.Fatalf("expected error to contain %q, got %q", p.expectedErr, err.Error())
} }
case <-l.cfgIgnored: case <-loader.cfgIgnored:
if p.expectedConf != nil { if p.expectedConf != nil {
t.Fatalf("expected config to be reloaded, but got ignored signal") t.Fatalf("expected config to be reloaded, but got ignored signal")
} }
@ -192,13 +192,13 @@ func TestWatchConfigSecret_Rewatches(t *testing.T) {
}) })
configChan := make(chan *conf.Config) configChan := make(chan *conf.Config)
l := NewConfigLoader(zap.Must(zap.NewDevelopment()).Sugar(), cl.CoreV1(), configChan) loader := NewConfigLoader(zap.Must(zap.NewDevelopment()).Sugar(), cl.CoreV1(), configChan)
mustCreateOrUpdate(t, cl, secretFrom(expected[0])) mustCreateOrUpdate(t, cl, secretFrom(expected[0]))
errs := make(chan error) errs := make(chan error)
go func() { go func() {
errs <- l.watchConfigSecretChanges(t.Context(), "default", "config-secret") errs <- loader.watchConfigSecretChanges(t.Context(), "default", "config-secret")
}() }()
for i := range 2 { for i := range 2 {
@ -212,7 +212,7 @@ func TestWatchConfigSecret_Rewatches(t *testing.T) {
} }
case err := <-errs: case err := <-errs:
t.Fatalf("unexpected error: %v", err) t.Fatalf("unexpected error: %v", err)
case <-l.cfgIgnored: case <-loader.cfgIgnored:
t.Fatalf("expected config to be reloaded, but got ignored signal") t.Fatalf("expected config to be reloaded, but got ignored signal")
case <-time.After(5 * time.Second): case <-time.After(5 * time.Second):
t.Fatalf("timed out waiting for expected event") t.Fatalf("timed out waiting for expected event")

@ -422,9 +422,9 @@ func (ipp *ConsensusIPPool) applyCheckoutAddr(nid tailcfg.NodeID, domain string,
} }
// Apply is part of the raft.FSM interface. It takes an incoming log entry and applies it to the state. // Apply is part of the raft.FSM interface. It takes an incoming log entry and applies it to the state.
func (ipp *ConsensusIPPool) Apply(l *raft.Log) any { func (ipp *ConsensusIPPool) Apply(lg *raft.Log) any {
var c tsconsensus.Command var c tsconsensus.Command
if err := json.Unmarshal(l.Data, &c); err != nil { if err := json.Unmarshal(lg.Data, &c); err != nil {
panic(fmt.Sprintf("failed to unmarshal command: %s", err.Error())) panic(fmt.Sprintf("failed to unmarshal command: %s", err.Error()))
} }
switch c.Name { switch c.Name {

@ -44,25 +44,52 @@ import (
"github.com/dsnet/try" "github.com/dsnet/try"
jsonv2 "github.com/go-json-experiment/json" jsonv2 "github.com/go-json-experiment/json"
"github.com/go-json-experiment/json/jsontext" "github.com/go-json-experiment/json/jsontext"
"tailscale.com/tailcfg"
"tailscale.com/types/bools"
"tailscale.com/types/logid" "tailscale.com/types/logid"
"tailscale.com/types/netlogtype" "tailscale.com/types/netlogtype"
"tailscale.com/util/must" "tailscale.com/util/must"
) )
var ( var (
resolveNames = flag.Bool("resolve-names", false, "convert tailscale IP addresses to hostnames; must also specify --api-key and --tailnet-id") resolveNames = flag.Bool("resolve-names", false, "This is equivalent to specifying \"--resolve-addrs=name\".")
apiKey = flag.String("api-key", "", "API key to query the Tailscale API with; see https://login.tailscale.com/admin/settings/keys") resolveAddrs = flag.String("resolve-addrs", "", "Resolve each tailscale IP address as a node ID, name, or user.\n"+
tailnetName = flag.String("tailnet-name", "", "tailnet domain name to lookup devices in; see https://login.tailscale.com/admin/settings/general") "If network flow logs do not support embedded node information,\n"+
"then --api-key and --tailnet-name must also be provided.\n"+
"Valid values include \"nodeId\", \"name\", or \"user\".")
apiKey = flag.String("api-key", "", "The API key to query the Tailscale API with.\nSee https://login.tailscale.com/admin/settings/keys")
tailnetName = flag.String("tailnet-name", "", "The Tailnet name to lookup nodes within.\nSee https://login.tailscale.com/admin/settings/general")
) )
var namesByAddr map[netip.Addr]string var (
tailnetNodesByAddr map[netip.Addr]netlogtype.Node
tailnetNodesByID map[tailcfg.StableNodeID]netlogtype.Node
)
func main() { func main() {
flag.Parse() flag.Parse()
if *resolveNames { if *resolveNames {
namesByAddr = mustMakeNamesByAddr() *resolveAddrs = "name"
}
*resolveAddrs = strings.ToLower(*resolveAddrs) // make case-insensitive
*resolveAddrs = strings.TrimSuffix(*resolveAddrs, "s") // allow plural form
*resolveAddrs = strings.ReplaceAll(*resolveAddrs, " ", "") // ignore spaces
*resolveAddrs = strings.ReplaceAll(*resolveAddrs, "-", "") // ignore dashes
*resolveAddrs = strings.ReplaceAll(*resolveAddrs, "_", "") // ignore underscores
switch *resolveAddrs {
case "":
case "id", "nodeid":
*resolveAddrs = "nodeid"
case "name", "hostname":
*resolveAddrs = "name"
case "user", "tag", "usertag", "taguser":
*resolveAddrs = "user" // tag resolution is implied
default:
log.Fatalf("--resolve-addrs must be \"nodeId\", \"name\", or \"user\"")
} }
mustLoadTailnetNodes()
// The logic handles a stream of arbitrary JSON. // The logic handles a stream of arbitrary JSON.
// So long as a JSON object seems like a network log message, // So long as a JSON object seems like a network log message,
// then this will unmarshal and print it. // then this will unmarshal and print it.
@ -103,7 +130,7 @@ func processArray(dec *jsontext.Decoder) {
func processObject(dec *jsontext.Decoder) { func processObject(dec *jsontext.Decoder) {
var hasTraffic bool var hasTraffic bool
var rawMsg []byte var rawMsg jsontext.Value
try.E1(dec.ReadToken()) // parse '{' try.E1(dec.ReadToken()) // parse '{'
for dec.PeekKind() != '}' { for dec.PeekKind() != '}' {
// Capture any members that could belong to a network log message. // Capture any members that could belong to a network log message.
@ -111,13 +138,13 @@ func processObject(dec *jsontext.Decoder) {
case "virtualTraffic", "subnetTraffic", "exitTraffic", "physicalTraffic": case "virtualTraffic", "subnetTraffic", "exitTraffic", "physicalTraffic":
hasTraffic = true hasTraffic = true
fallthrough fallthrough
case "logtail", "nodeId", "logged", "start", "end": case "logtail", "nodeId", "logged", "srcNode", "dstNodes", "start", "end":
if len(rawMsg) == 0 { if len(rawMsg) == 0 {
rawMsg = append(rawMsg, '{') rawMsg = append(rawMsg, '{')
} else { } else {
rawMsg = append(rawMsg[:len(rawMsg)-1], ',') rawMsg = append(rawMsg[:len(rawMsg)-1], ',')
} }
rawMsg = append(append(append(rawMsg, '"'), name.String()...), '"') rawMsg, _ = jsontext.AppendQuote(rawMsg, name.String())
rawMsg = append(rawMsg, ':') rawMsg = append(rawMsg, ':')
rawMsg = append(rawMsg, try.E1(dec.ReadValue())...) rawMsg = append(rawMsg, try.E1(dec.ReadValue())...)
rawMsg = append(rawMsg, '}') rawMsg = append(rawMsg, '}')
@ -145,6 +172,32 @@ type message struct {
} }
func printMessage(msg message) { func printMessage(msg message) {
var nodesByAddr map[netip.Addr]netlogtype.Node
var tailnetDNS string // e.g., ".acme-corp.ts.net"
if *resolveAddrs != "" {
nodesByAddr = make(map[netip.Addr]netlogtype.Node)
insertNode := func(node netlogtype.Node) {
for _, addr := range node.Addresses {
nodesByAddr[addr] = node
}
}
for _, node := range msg.DstNodes {
insertNode(node)
}
insertNode(msg.SrcNode)
// Derive the Tailnet DNS of the self node.
detectTailnetDNS := func(nodeName string) {
if prefix, ok := strings.CutSuffix(nodeName, ".ts.net"); ok {
if i := strings.LastIndexByte(prefix, '.'); i > 0 {
tailnetDNS = nodeName[i:]
}
}
}
detectTailnetDNS(msg.SrcNode.Name)
detectTailnetDNS(tailnetNodesByID[msg.NodeID].Name)
}
// Construct a table of network traffic per connection. // Construct a table of network traffic per connection.
rows := [][7]string{{3: "Tx[P/s]", 4: "Tx[B/s]", 5: "Rx[P/s]", 6: "Rx[B/s]"}} rows := [][7]string{{3: "Tx[P/s]", 4: "Tx[B/s]", 5: "Rx[P/s]", 6: "Rx[B/s]"}}
duration := msg.End.Sub(msg.Start) duration := msg.End.Sub(msg.Start)
@ -175,16 +228,25 @@ func printMessage(msg message) {
if !a.IsValid() { if !a.IsValid() {
return "" return ""
} }
if name, ok := namesByAddr[a.Addr()]; ok { name := a.Addr().String()
if a.Port() == 0 { node, ok := tailnetNodesByAddr[a.Addr()]
return name if !ok {
node, ok = nodesByAddr[a.Addr()]
} }
return name + ":" + strconv.Itoa(int(a.Port())) if ok {
switch *resolveAddrs {
case "nodeid":
name = cmp.Or(string(node.NodeID), name)
case "name":
name = cmp.Or(strings.TrimSuffix(string(node.Name), tailnetDNS), name)
case "user":
name = cmp.Or(bools.IfElse(len(node.Tags) > 0, fmt.Sprint(node.Tags), node.User), name)
} }
if a.Port() == 0 {
return a.Addr().String()
} }
return a.String() if a.Port() != 0 {
return name + ":" + strconv.Itoa(int(a.Port()))
}
return name
} }
for _, cc := range traffic { for _, cc := range traffic {
row := [7]string{ row := [7]string{
@ -279,8 +341,10 @@ func printMessage(msg message) {
} }
} }
func mustMakeNamesByAddr() map[netip.Addr]string { func mustLoadTailnetNodes() {
switch { switch {
case *apiKey == "" && *tailnetName == "":
return // rely on embedded node information in the logs themselves
case *apiKey == "": case *apiKey == "":
log.Fatalf("--api-key must be specified with --resolve-names") log.Fatalf("--api-key must be specified with --resolve-names")
case *tailnetName == "": case *tailnetName == "":
@ -300,57 +364,19 @@ func mustMakeNamesByAddr() map[netip.Addr]string {
// Unmarshal the API response. // Unmarshal the API response.
var m struct { var m struct {
Devices []struct { Devices []netlogtype.Node `json:"devices"`
Name string `json:"name"`
Addrs []netip.Addr `json:"addresses"`
} `json:"devices"`
} }
must.Do(json.Unmarshal(b, &m)) must.Do(json.Unmarshal(b, &m))
// Construct a unique mapping of Tailscale IP addresses to hostnames. // Construct a mapping of Tailscale IP addresses to node information.
// For brevity, we start with the first segment of the name and tailnetNodesByAddr = make(map[netip.Addr]netlogtype.Node)
// use more segments until we find the shortest prefix that is unique tailnetNodesByID = make(map[tailcfg.StableNodeID]netlogtype.Node)
// for all names in the tailnet. for _, node := range m.Devices {
seen := make(map[string]bool) for _, addr := range node.Addresses {
namesByAddr := make(map[netip.Addr]string) tailnetNodesByAddr[addr] = node
retry:
for i := range 10 {
clear(seen)
clear(namesByAddr)
for _, d := range m.Devices {
name := fieldPrefix(d.Name, i)
if seen[name] {
continue retry
}
seen[name] = true
for _, a := range d.Addrs {
namesByAddr[a] = name
}
}
return namesByAddr
}
panic("unable to produce unique mapping of address to names")
}
// fieldPrefix returns the first n number of dot-separated segments.
//
// Example:
//
// fieldPrefix("foo.bar.baz", 0) returns ""
// fieldPrefix("foo.bar.baz", 1) returns "foo"
// fieldPrefix("foo.bar.baz", 2) returns "foo.bar"
// fieldPrefix("foo.bar.baz", 3) returns "foo.bar.baz"
// fieldPrefix("foo.bar.baz", 4) returns "foo.bar.baz"
func fieldPrefix(s string, n int) string {
s0 := s
for i := 0; i < n && len(s) > 0; i++ {
if j := strings.IndexByte(s, '.'); j >= 0 {
s = s[j+1:]
} else {
s = ""
} }
tailnetNodesByID[node.NodeID] = node
} }
return strings.TrimSuffix(s0[:len(s0)-len(s)], ".")
} }
func appendRepeatByte(b []byte, c byte, n int) []byte { func appendRepeatByte(b []byte, c byte, n int) []byte {

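The --resolve-addrs normalization above makes the flag spelling-tolerant: for example "--resolve-addrs=Node-IDs" lowercases, drops the plural "s", strips the dash, and canonicalizes to "nodeid". A standalone sketch of the same canonicalization:

package main

import (
	"fmt"
	"strings"
)

// canonicalResolveAddrs mirrors the normalization in netlogfmt's main:
// case-insensitive, plural-tolerant, ignoring spaces/dashes/underscores.
func canonicalResolveAddrs(v string) string {
	v = strings.ToLower(v)
	v = strings.TrimSuffix(v, "s")
	return strings.NewReplacer(" ", "", "-", "", "_", "").Replace(v)
}

func main() {
	fmt.Println(canonicalResolveAddrs("Node-IDs"))  // nodeid
	fmt.Println(canonicalResolveAddrs("host_name")) // hostname, mapped to "name" by the switch
}
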
@ -141,7 +141,7 @@ func run(ctx context.Context, ts *tsnet.Server, wgPort int, hostname string, pro
// in the netmap. // in the netmap.
// We set the NotifyInitialNetMap flag so we will always get woken with the // We set the NotifyInitialNetMap flag so we will always get woken with the
// current netmap, before only being woken on changes. // current netmap, before only being woken on changes.
bus, err := lc.WatchIPNBus(ctx, ipn.NotifyWatchEngineUpdates|ipn.NotifyInitialNetMap|ipn.NotifyNoPrivateKeys) bus, err := lc.WatchIPNBus(ctx, ipn.NotifyWatchEngineUpdates|ipn.NotifyInitialNetMap)
if err != nil { if err != nil {
log.Fatalf("watching IPN bus: %v", err) log.Fatalf("watching IPN bus: %v", err)
} }

@ -152,17 +152,17 @@ func TestSNIProxyWithNetmapConfig(t *testing.T) {
configCapKey: []tailcfg.RawMessage{tailcfg.RawMessage(b)}, configCapKey: []tailcfg.RawMessage{tailcfg.RawMessage(b)},
}) })
// Lets spin up a second node (to represent the client). // Let's spin up a second node (to represent the client).
client, _, _ := startNode(t, ctx, controlURL, "client") client, _, _ := startNode(t, ctx, controlURL, "client")
// Make sure that the sni node has received its config. // Make sure that the sni node has received its config.
l, err := sni.LocalClient() lc, err := sni.LocalClient()
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
gotConfigured := false gotConfigured := false
for range 100 { for range 100 {
s, err := l.StatusWithoutPeers(ctx) s, err := lc.StatusWithoutPeers(ctx)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -176,7 +176,7 @@ func TestSNIProxyWithNetmapConfig(t *testing.T) {
t.Error("sni node never received its configuration from the coordination server!") t.Error("sni node never received its configuration from the coordination server!")
} }
// Lets make the client open a connection to the sniproxy node, and // Let's make the client open a connection to the sniproxy node, and
// make sure it results in a connection to our test listener. // make sure it results in a connection to our test listener.
w, err := client.Dial(ctx, "tcp", fmt.Sprintf("%s:%d", ip, ln.Addr().(*net.TCPAddr).Port)) w, err := client.Dial(ctx, "tcp", fmt.Sprintf("%s:%d", ip, ln.Addr().(*net.TCPAddr).Port))
if err != nil { if err != nil {
@ -208,10 +208,10 @@ func TestSNIProxyWithFlagConfig(t *testing.T) {
sni, _, ip := startNode(t, ctx, controlURL, "snitest") sni, _, ip := startNode(t, ctx, controlURL, "snitest")
go run(ctx, sni, 0, sni.Hostname, false, 0, "", fmt.Sprintf("tcp/%d/localhost", ln.Addr().(*net.TCPAddr).Port)) go run(ctx, sni, 0, sni.Hostname, false, 0, "", fmt.Sprintf("tcp/%d/localhost", ln.Addr().(*net.TCPAddr).Port))
// Lets spin up a second node (to represent the client). // Let's spin up a second node (to represent the client).
client, _, _ := startNode(t, ctx, controlURL, "client") client, _, _ := startNode(t, ctx, controlURL, "client")
// Lets make the client open a connection to the sniproxy node, and // Let's make the client open a connection to the sniproxy node, and
// make sure it results in a connection to our test listener. // make sure it results in a connection to our test listener.
w, err := client.Dial(ctx, "tcp", fmt.Sprintf("%s:%d", ip, ln.Addr().(*net.TCPAddr).Port)) w, err := client.Dial(ctx, "tcp", fmt.Sprintf("%s:%d", ip, ln.Addr().(*net.TCPAddr).Port))
if err != nil { if err != nil {

@ -14,9 +14,9 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
github.com/prometheus/client_model/go from github.com/prometheus/client_golang/prometheus+ github.com/prometheus/client_model/go from github.com/prometheus/client_golang/prometheus+
github.com/prometheus/common/expfmt from github.com/prometheus/client_golang/prometheus+ github.com/prometheus/common/expfmt from github.com/prometheus/client_golang/prometheus+
github.com/prometheus/common/model from github.com/prometheus/client_golang/prometheus+ github.com/prometheus/common/model from github.com/prometheus/client_golang/prometheus+
LD github.com/prometheus/procfs from github.com/prometheus/client_golang/prometheus L github.com/prometheus/procfs from github.com/prometheus/client_golang/prometheus
LD github.com/prometheus/procfs/internal/fs from github.com/prometheus/procfs L github.com/prometheus/procfs/internal/fs from github.com/prometheus/procfs
LD github.com/prometheus/procfs/internal/util from github.com/prometheus/procfs L github.com/prometheus/procfs/internal/util from github.com/prometheus/procfs
💣 go4.org/mem from tailscale.com/metrics+ 💣 go4.org/mem from tailscale.com/metrics+
go4.org/netipx from tailscale.com/net/tsaddr go4.org/netipx from tailscale.com/net/tsaddr
google.golang.org/protobuf/encoding/protodelim from github.com/prometheus/common/expfmt google.golang.org/protobuf/encoding/protodelim from github.com/prometheus/common/expfmt
@ -47,7 +47,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
google.golang.org/protobuf/reflect/protoregistry from google.golang.org/protobuf/encoding/prototext+ google.golang.org/protobuf/reflect/protoregistry from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/runtime/protoiface from google.golang.org/protobuf/internal/impl+ google.golang.org/protobuf/runtime/protoiface from google.golang.org/protobuf/internal/impl+
google.golang.org/protobuf/runtime/protoimpl from github.com/prometheus/client_model/go+ google.golang.org/protobuf/runtime/protoimpl from github.com/prometheus/client_model/go+
google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+ 💣 google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+
tailscale.com from tailscale.com/version tailscale.com from tailscale.com/version
tailscale.com/envknob from tailscale.com/tsweb+ tailscale.com/envknob from tailscale.com/tsweb+
tailscale.com/feature from tailscale.com/tsweb tailscale.com/feature from tailscale.com/tsweb
@ -82,8 +82,9 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
tailscale.com/util/mak from tailscale.com/syncs+ tailscale.com/util/mak from tailscale.com/syncs+
tailscale.com/util/nocasemaps from tailscale.com/types/ipproto tailscale.com/util/nocasemaps from tailscale.com/types/ipproto
tailscale.com/util/rands from tailscale.com/tsweb tailscale.com/util/rands from tailscale.com/tsweb
tailscale.com/util/set from tailscale.com/types/key
tailscale.com/util/slicesx from tailscale.com/tailcfg tailscale.com/util/slicesx from tailscale.com/tailcfg
tailscale.com/util/testenv from tailscale.com/types/logger tailscale.com/util/testenv from tailscale.com/types/logger+
tailscale.com/util/vizerror from tailscale.com/tailcfg+ tailscale.com/util/vizerror from tailscale.com/tailcfg+
tailscale.com/version from tailscale.com/envknob+ tailscale.com/version from tailscale.com/envknob+
tailscale.com/version/distro from tailscale.com/envknob tailscale.com/version/distro from tailscale.com/envknob
@ -94,7 +95,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
golang.org/x/crypto/nacl/box from tailscale.com/types/key golang.org/x/crypto/nacl/box from tailscale.com/types/key
golang.org/x/crypto/nacl/secretbox from golang.org/x/crypto/nacl/box golang.org/x/crypto/nacl/secretbox from golang.org/x/crypto/nacl/box
golang.org/x/crypto/salsa20/salsa from golang.org/x/crypto/nacl/box+ golang.org/x/crypto/salsa20/salsa from golang.org/x/crypto/nacl/box+
golang.org/x/exp/constraints from tailscale.com/tsweb/varz golang.org/x/exp/constraints from tailscale.com/tsweb/varz+
golang.org/x/sys/cpu from golang.org/x/crypto/blake2b+ golang.org/x/sys/cpu from golang.org/x/crypto/blake2b+
LD golang.org/x/sys/unix from github.com/prometheus/procfs+ LD golang.org/x/sys/unix from github.com/prometheus/procfs+
W golang.org/x/sys/windows from github.com/prometheus/client_golang/prometheus W golang.org/x/sys/windows from github.com/prometheus/client_golang/prometheus

@ -135,18 +135,18 @@ type lportsPool struct {
ports []int ports []int
} }
func (l *lportsPool) get() int { func (pl *lportsPool) get() int {
l.Lock() pl.Lock()
defer l.Unlock() defer pl.Unlock()
ret := l.ports[0] ret := pl.ports[0]
l.ports = append(l.ports[:0], l.ports[1:]...) pl.ports = append(pl.ports[:0], pl.ports[1:]...)
return ret return ret
} }
func (l *lportsPool) put(i int) { func (pl *lportsPool) put(i int) {
l.Lock() pl.Lock()
defer l.Unlock() defer pl.Unlock()
l.ports = append(l.ports, int(i)) pl.ports = append(pl.ports, int(i))
} }
var ( var (
@ -173,19 +173,19 @@ func init() {
// measure dial time. // measure dial time.
type lportForTCPConn int type lportForTCPConn int
func (l *lportForTCPConn) Close() error { func (lp *lportForTCPConn) Close() error {
if *l == 0 { if *lp == 0 {
return nil return nil
} }
lports.put(int(*l)) lports.put(int(*lp))
return nil return nil
} }
func (l *lportForTCPConn) Write([]byte) (int, error) { func (lp *lportForTCPConn) Write([]byte) (int, error) {
return 0, errors.New("unimplemented") return 0, errors.New("unimplemented")
} }
func (l *lportForTCPConn) Read([]byte) (int, error) { func (lp *lportForTCPConn) Read([]byte) (int, error) {
return 0, errors.New("unimplemented") return 0, errors.New("unimplemented")
} }

@ -65,9 +65,9 @@ func main() {
} }
add, remove := diffTags(stags, dtags) add, remove := diffTags(stags, dtags)
if l := len(add); l > 0 { if ln := len(add); ln > 0 {
log.Printf("%d tags to push: %s", len(add), strings.Join(add, ", ")) log.Printf("%d tags to push: %s", len(add), strings.Join(add, ", "))
if *max > 0 && l > *max { if *max > 0 && ln > *max {
log.Printf("Limiting sync to %d tags", *max) log.Printf("Limiting sync to %d tags", *max)
add = add[:*max] add = add[:*max]
} }

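diffTags itself is not shown in this hunk; below is a plausible set-difference sketch consistent with the call site "add, remove := diffTags(stags, dtags)". This is a hypothetical implementation, not the repository's:

package main

import "fmt"

// diffTags returns tags present in src but missing from dst (add), and
// tags present in dst but missing from src (remove). Hypothetical sketch.
func diffTags(src, dst []string) (add, remove []string) {
	inDst := make(map[string]bool, len(dst))
	for _, t := range dst {
		inDst[t] = true
	}
	inSrc := make(map[string]bool, len(src))
	for _, t := range src {
		inSrc[t] = true
	}
	for _, t := range src {
		if !inDst[t] {
			add = append(add, t)
		}
	}
	for _, t := range dst {
		if !inSrc[t] {
			remove = append(remove, t)
		}
	}
	return add, remove
}

func main() {
	add, remove := diffTags([]string{"v1.0", "v1.1"}, []string{"v1.0", "v0.9"})
	fmt.Println(add, remove) // [v1.1] [v0.9]
}
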
@ -174,6 +174,7 @@ func TestCheckForAccidentalSettingReverts(t *testing.T) {
curUser string // os.Getenv("USER") on the client side curUser string // os.Getenv("USER") on the client side
goos string // empty means "linux" goos string // empty means "linux"
distro distro.Distro distro distro.Distro
backendState string // empty means "Running"
want string want string
}{ }{
@ -188,6 +189,28 @@ func TestCheckForAccidentalSettingReverts(t *testing.T) {
}, },
want: "", want: "",
}, },
{
name: "bare_up_needs_login_default_prefs",
flags: []string{},
curPrefs: ipn.NewPrefs(),
backendState: ipn.NeedsLogin.String(),
want: "",
},
{
name: "bare_up_needs_login_losing_prefs",
flags: []string{},
curPrefs: &ipn.Prefs{
// defaults:
ControlURL: ipn.DefaultControlURL,
WantRunning: false,
NetfilterMode: preftype.NetfilterOn,
NoStatefulFiltering: opt.NewBool(true),
// non-default:
CorpDNS: false,
},
backendState: ipn.NeedsLogin.String(),
want: accidentalUpPrefix + " --accept-dns=false",
},
{ {
name: "losing_hostname", name: "losing_hostname",
flags: []string{"--accept-dns"}, flags: []string{"--accept-dns"},
@ -620,9 +643,13 @@ func TestCheckForAccidentalSettingReverts(t *testing.T) {
} }
for _, tt := range tests { for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) { t.Run(tt.name, func(t *testing.T) {
goos := "linux" goos := stdcmp.Or(tt.goos, "linux")
if tt.goos != "" { backendState := stdcmp.Or(tt.backendState, ipn.Running.String())
goos = tt.goos // Needs to match the other conditions in checkForAccidentalSettingReverts
tt.curPrefs.Persist = &persist.Persist{
UserProfile: tailcfg.UserProfile{
LoginName: "janet",
},
} }
var upArgs upArgsT var upArgs upArgsT
flagSet := newUpFlagSet(goos, &upArgs, "up") flagSet := newUpFlagSet(goos, &upArgs, "up")
@ -638,10 +665,11 @@ func TestCheckForAccidentalSettingReverts(t *testing.T) {
curExitNodeIP: tt.curExitNodeIP, curExitNodeIP: tt.curExitNodeIP,
distro: tt.distro, distro: tt.distro,
user: tt.curUser, user: tt.curUser,
backendState: backendState,
} }
applyImplicitPrefs(newPrefs, tt.curPrefs, upEnv) applyImplicitPrefs(newPrefs, tt.curPrefs, upEnv)
var got string var got string
if err := checkForAccidentalSettingReverts(newPrefs, tt.curPrefs, upEnv); err != nil { if _, err := checkForAccidentalSettingReverts(newPrefs, tt.curPrefs, upEnv); err != nil {
got = err.Error() got = err.Error()
} }
if strings.TrimSpace(got) != tt.want { if strings.TrimSpace(got) != tt.want {
@ -1013,11 +1041,8 @@ func TestUpdatePrefs(t *testing.T) {
{ {
name: "bare_up_means_up", name: "bare_up_means_up",
flags: []string{}, flags: []string{},
curPrefs: &ipn.Prefs{ curPrefs: ipn.NewPrefs(),
ControlURL: ipn.DefaultControlURL, wantSimpleUp: false, // user profile not set, so no simple up
WantRunning: false,
Hostname: "foo",
},
}, },
{ {
name: "just_up", name: "just_up",
@ -1031,6 +1056,32 @@ func TestUpdatePrefs(t *testing.T) {
}, },
wantSimpleUp: true, wantSimpleUp: true,
}, },
{
name: "just_up_needs_login_default_prefs",
flags: []string{},
curPrefs: ipn.NewPrefs(),
env: upCheckEnv{
backendState: "NeedsLogin",
},
wantSimpleUp: false,
},
{
name: "just_up_needs_login_losing_prefs",
flags: []string{},
curPrefs: &ipn.Prefs{
// defaults:
ControlURL: ipn.DefaultControlURL,
WantRunning: false,
NetfilterMode: preftype.NetfilterOn,
// non-default:
CorpDNS: false,
},
env: upCheckEnv{
backendState: "NeedsLogin",
},
wantSimpleUp: false,
wantErrSubtr: "tailscale up --accept-dns=false",
},
{ {
name: "just_edit", name: "just_edit",
flags: []string{}, flags: []string{},

@ -48,9 +48,12 @@ func runConfigureJetKVM(ctx context.Context, args []string) error {
if runtime.GOOS != "linux" || distro.Get() != distro.JetKVM { if runtime.GOOS != "linux" || distro.Get() != distro.JetKVM {
return errors.New("only implemented on JetKVM") return errors.New("only implemented on JetKVM")
} }
err := os.WriteFile("/etc/init.d/S22tailscale", bytes.TrimLeft([]byte(` if err := os.MkdirAll("/userdata/init.d", 0755); err != nil {
return errors.New("unable to create /userdata/init.d")
}
err := os.WriteFile("/userdata/init.d/S22tailscale", bytes.TrimLeft([]byte(`
#!/bin/sh #!/bin/sh
# /etc/init.d/S22tailscale # /userdata/init.d/S22tailscale
# Start/stop tailscaled # Start/stop tailscaled
case "$1" in case "$1" in

@ -182,6 +182,12 @@ func debugCmd() *ffcli.Command {
Exec: localAPIAction("rebind"), Exec: localAPIAction("rebind"),
ShortHelp: "Force a magicsock rebind", ShortHelp: "Force a magicsock rebind",
}, },
{
Name: "rotate-disco-key",
ShortUsage: "tailscale debug rotate-disco-key",
Exec: localAPIAction("rotate-disco-key"),
ShortHelp: "Rotate the discovery key",
},
{ {
Name: "derp-set-on-demand", Name: "derp-set-on-demand",
ShortUsage: "tailscale debug derp-set-on-demand", ShortUsage: "tailscale debug derp-set-on-demand",
@ -257,8 +263,7 @@ func debugCmd() *ffcli.Command {
fs := newFlagSet("watch-ipn") fs := newFlagSet("watch-ipn")
fs.BoolVar(&watchIPNArgs.netmap, "netmap", true, "include netmap in messages") fs.BoolVar(&watchIPNArgs.netmap, "netmap", true, "include netmap in messages")
fs.BoolVar(&watchIPNArgs.initial, "initial", false, "include initial status") fs.BoolVar(&watchIPNArgs.initial, "initial", false, "include initial status")
fs.BoolVar(&watchIPNArgs.rateLimit, "rate-limit", true, "rate limit messags") fs.BoolVar(&watchIPNArgs.rateLimit, "rate-limit", true, "rate limit messages")
fs.BoolVar(&watchIPNArgs.showPrivateKey, "show-private-key", false, "include node private key in printed netmap")
fs.IntVar(&watchIPNArgs.count, "count", 0, "exit after printing this many statuses, or 0 to keep going forever") fs.IntVar(&watchIPNArgs.count, "count", 0, "exit after printing this many statuses, or 0 to keep going forever")
return fs return fs
})(), })(),
@ -270,7 +275,6 @@ func debugCmd() *ffcli.Command {
ShortHelp: "Print the current network map", ShortHelp: "Print the current network map",
FlagSet: (func() *flag.FlagSet { FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("netmap") fs := newFlagSet("netmap")
fs.BoolVar(&netmapArgs.showPrivateKey, "show-private-key", false, "include node private key in printed netmap")
return fs return fs
})(), })(),
}, },
@ -616,7 +620,6 @@ func runPrefs(ctx context.Context, args []string) error {
var watchIPNArgs struct { var watchIPNArgs struct {
netmap bool netmap bool
initial bool initial bool
showPrivateKey bool
rateLimit bool rateLimit bool
count int count int
} }
@ -626,9 +629,6 @@ func runWatchIPN(ctx context.Context, args []string) error {
if watchIPNArgs.initial { if watchIPNArgs.initial {
mask = ipn.NotifyInitialState | ipn.NotifyInitialPrefs | ipn.NotifyInitialNetMap mask = ipn.NotifyInitialState | ipn.NotifyInitialPrefs | ipn.NotifyInitialNetMap
} }
if !watchIPNArgs.showPrivateKey {
mask |= ipn.NotifyNoPrivateKeys
}
if watchIPNArgs.rateLimit { if watchIPNArgs.rateLimit {
mask |= ipn.NotifyRateLimit mask |= ipn.NotifyRateLimit
} }
@ -652,18 +652,11 @@ func runWatchIPN(ctx context.Context, args []string) error {
return nil return nil
} }
var netmapArgs struct {
showPrivateKey bool
}
func runNetmap(ctx context.Context, args []string) error { func runNetmap(ctx context.Context, args []string) error {
ctx, cancel := context.WithTimeout(ctx, 5*time.Second) ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel() defer cancel()
var mask ipn.NotifyWatchOpt = ipn.NotifyInitialNetMap var mask ipn.NotifyWatchOpt = ipn.NotifyInitialNetMap
if !netmapArgs.showPrivateKey {
mask |= ipn.NotifyNoPrivateKeys
}
watcher, err := localClient.WatchIPNBus(ctx, mask) watcher, err := localClient.WatchIPNBus(ctx, mask)
if err != nil { if err != nil {
return err return err

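With the show-private-key flag removed, the watch-ipn mask is built purely from the remaining flags. A condensed sketch of the resulting logic, with the flag plumbing elided; initial and rateLimit stand for the parsed flag values:

var mask ipn.NotifyWatchOpt
if initial {
	mask = ipn.NotifyInitialState | ipn.NotifyInitialPrefs | ipn.NotifyInitialNetMap
}
if rateLimit {
	mask |= ipn.NotifyRateLimit
}
watcher, err := localClient.WatchIPNBus(ctx, mask)
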
@ -0,0 +1,84 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// Package jsonoutput provides stable, versioned JSON serialisation for CLI output.
// This lets us give scripts/clients stable output while still making
// breaking changes to the output when it's useful.
//
// Historically we only used `--json` as a boolean flag, so changing the output
// could break scripts that rely on the existing format.
//
// This package allows callers to pass a version number to `--json` and get
// a consistent output. We'll bump the version when we make a breaking change
// that's likely to break scripts that rely on the existing output, e.g. if
// we remove a field or change the type/format.
//
// Passing just the boolean flag `--json` will always return v1, to preserve
// compatibility with scripts written before we versioned our output.
package jsonoutput
import (
"errors"
"fmt"
"strconv"
)
// JSONSchemaVersion implements flag.Value, and tracks whether the CLI has
// been called with `--json`, and if so, with what value.
type JSONSchemaVersion struct {
// IsSet tracks if the flag was provided at all.
IsSet bool
// Value tracks the desired schema version, which defaults to 1 if
// the user passes `--json` without an argument.
Value int
}
// String returns the default value which is printed in the CLI help text.
func (v *JSONSchemaVersion) String() string {
if v.IsSet {
return strconv.Itoa(v.Value)
} else {
return "(not set)"
}
}
// Set is called when the user passes the flag as a command-line argument.
func (v *JSONSchemaVersion) Set(s string) error {
if v.IsSet {
return errors.New("received multiple instances of --json; only pass it once")
}
v.IsSet = true
// If the user doesn't supply a schema version, default to 1.
// This ensures that any existing scripts will continue to get their
// current output.
if s == "true" {
v.Value = 1
return nil
}
version, err := strconv.Atoi(s)
if err != nil {
return fmt.Errorf("invalid integer value passed to --json: %q", s)
}
v.Value = version
return nil
}
// IsBoolFlag tells the flag package that JSONSchemaVersion can be set
// without an argument.
func (v *JSONSchemaVersion) IsBoolFlag() bool {
return true
}
// ResponseEnvelope is a set of fields common to all versioned JSON output.
type ResponseEnvelope struct {
// SchemaVersion is the version of the JSON output, e.g. "1", "2", "3"
SchemaVersion string
// ResponseWarning tells a user if a newer version of the JSON output
// is available.
ResponseWarning string `json:"_WARNING,omitzero"`
}
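
For orientation, here is a minimal, self-contained sketch of how a command wires this flag up, following the same pattern the lock log command adopts later in this change. The fooArgs struct, the printFooJSONV1 helper, and the flag-set name are illustrative, not part of the diff:

package main

import (
	"flag"
	"fmt"
	"io"
	"os"

	"tailscale.com/cmd/tailscale/cli/jsonoutput"
)

// fooArgs is a hypothetical command's flag storage.
var fooArgs struct {
	json jsonoutput.JSONSchemaVersion
}

func main() {
	fs := flag.NewFlagSet("foo", flag.ExitOnError)
	fs.Var(&fooArgs.json, "json", "output in JSON format")
	fs.Parse(os.Args[1:])
	if err := runFoo(os.Stdout); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

func runFoo(out io.Writer) error {
	if fooArgs.json.IsSet {
		switch fooArgs.json.Value {
		case 1:
			return printFooJSONV1(out)
		default:
			return fmt.Errorf("unrecognised version: %d", fooArgs.json.Value)
		}
	}
	// Human-readable output when --json is absent.
	_, err := fmt.Fprintln(out, "foo: ok")
	return err
}

// printFooJSONV1 stands in for a real v1 printer such as PrintNetworkLockJSONV1.
func printFooJSONV1(out io.Writer) error {
	_, err := fmt.Fprintln(out, `{"SchemaVersion":"1"}`)
	return err
}

Because IsBoolFlag reports true, both `--json` (defaulting to version 1) and `--json=2` parse as expected.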

@ -0,0 +1,203 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package jsonoutput
import (
"bytes"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tka"
)
// PrintNetworkLockJSONV1 prints the stored TKA state as a JSON object to the CLI,
// in a stable "v1" format.
//
// This format includes:
//
// - the AUM hash as a base32-encoded string
// - the raw AUM as base64-encoded bytes
// - the expanded AUM, which prints named fields for consumption by other tools
func PrintNetworkLockJSONV1(out io.Writer, updates []ipnstate.NetworkLockUpdate) error {
messages := make([]logMessageV1, len(updates))
for i, update := range updates {
var aum tka.AUM
if err := aum.Unserialize(update.Raw); err != nil {
return fmt.Errorf("decoding: %w", err)
}
h := aum.Hash()
if !bytes.Equal(h[:], update.Hash[:]) {
return fmt.Errorf("incorrect AUM hash: got %v, want %v", h, update)
}
messages[i] = toLogMessageV1(aum, update)
}
result := struct {
ResponseEnvelope
Messages []logMessageV1
}{
ResponseEnvelope: ResponseEnvelope{
SchemaVersion: "1",
},
Messages: messages,
}
enc := json.NewEncoder(out)
enc.SetIndent("", " ")
return enc.Encode(result)
}
// toLogMessageV1 converts a [tka.AUM] and [ipnstate.NetworkLockUpdate] to the
// JSON output returned by the CLI.
func toLogMessageV1(aum tka.AUM, update ipnstate.NetworkLockUpdate) logMessageV1 {
expandedAUM := expandedAUMV1{}
expandedAUM.MessageKind = aum.MessageKind.String()
if len(aum.PrevAUMHash) > 0 {
expandedAUM.PrevAUMHash = aum.PrevAUMHash.String()
}
if key := aum.Key; key != nil {
expandedAUM.Key = toExpandedKeyV1(key)
}
if keyID := aum.KeyID; keyID != nil {
expandedAUM.KeyID = fmt.Sprintf("tlpub:%x", keyID)
}
if state := aum.State; state != nil {
expandedState := expandedStateV1{}
if h := state.LastAUMHash; h != nil {
expandedState.LastAUMHash = h.String()
}
for _, secret := range state.DisablementSecrets {
expandedState.DisablementSecrets = append(expandedState.DisablementSecrets, fmt.Sprintf("%x", secret))
}
for _, key := range state.Keys {
expandedState.Keys = append(expandedState.Keys, toExpandedKeyV1(&key))
}
expandedState.StateID1 = state.StateID1
expandedState.StateID2 = state.StateID2
expandedAUM.State = expandedState
}
if votes := aum.Votes; votes != nil {
expandedAUM.Votes = *votes
}
expandedAUM.Meta = aum.Meta
for _, signature := range aum.Signatures {
expandedAUM.Signatures = append(expandedAUM.Signatures, expandedSignatureV1{
KeyID: fmt.Sprintf("tlpub:%x", signature.KeyID),
Signature: base64.URLEncoding.EncodeToString(signature.Signature),
})
}
return logMessageV1{
Hash: aum.Hash().String(),
AUM: expandedAUM,
Raw: base64.URLEncoding.EncodeToString(update.Raw),
}
}
// toExpandedKeyV1 converts a [tka.Key] to the JSON output returned
// by the CLI.
func toExpandedKeyV1(key *tka.Key) expandedKeyV1 {
return expandedKeyV1{
Kind: key.Kind.String(),
Votes: key.Votes,
Public: fmt.Sprintf("tlpub:%x", key.Public),
Meta: key.Meta,
}
}
// logMessageV1 is the JSON representation of an AUM as both raw bytes and
// in its expanded form, and the CLI output is a list of these entries.
type logMessageV1 struct {
// The BLAKE2s digest of the CBOR-encoded AUM. This is printed as a
// base32-encoded string, e.g. KCE…XZQ
Hash string
// The expanded form of the AUM, which presents the fields in a more
// accessible format than doing a CBOR decoding.
AUM expandedAUMV1
// The raw bytes of the CBOR-encoded AUM, encoded as base64.
// This is useful for verifying the AUM hash.
Raw string
}
// expandedAUMV1 is the expanded version of a [tka.AUM], designed so external tools
// can read the AUM without knowing our CBOR definitions.
type expandedAUMV1 struct {
MessageKind string
PrevAUMHash string `json:"PrevAUMHash,omitzero"`
// Key encodes a public key to be added to the key authority.
// This field is used for AddKey AUMs.
Key expandedKeyV1 `json:"Key,omitzero"`
// KeyID references a public key which is part of the key authority.
// This field is used for RemoveKey and UpdateKey AUMs.
KeyID string `json:"KeyID,omitzero"`
// State describes the full state of the key authority.
// This field is used for Checkpoint AUMs.
State expandedStateV1 `json:"State,omitzero"`
// Votes and Meta describe properties of a key in the key authority.
// These fields are used for UpdateKey AUMs.
Votes uint `json:"Votes,omitzero"`
Meta map[string]string `json:"Meta,omitzero"`
// Signatures lists the signatures over this AUM.
Signatures []expandedSignatureV1 `json:"Signatures,omitzero"`
}
// expandedKeyV1 is the expanded version of a [tka.Key], which describes
// the public components of a key known to network-lock.
type expandedKeyV1 struct {
Kind string
// Votes describes the weight applied to signatures using this key.
Votes uint
// Public encodes the public key of the key as a hex string.
Public string
// Meta describes arbitrary metadata about the key. This could be
// used to store the name of the key, for instance.
Meta map[string]string `json:"Meta,omitzero"`
}
// expandedStateV1 is the expanded version of a [tka.State], which describes
// Tailnet Key Authority state at an instant in time.
type expandedStateV1 struct {
// LastAUMHash is the blake2s digest of the last-applied AUM.
LastAUMHash string `json:"LastAUMHash,omitzero"`
// DisablementSecrets are KDF-derived values which can be used
// to turn off the TKA in the event of a consensus-breaking bug.
DisablementSecrets []string
// Keys are the public keys of either:
//
// 1. The signing nodes currently trusted by the TKA.
// 2. Ephemeral keys that were used to generate pre-signed auth keys.
Keys []expandedKeyV1
// StateID's are nonce's, generated on enablement and fixed for
// the lifetime of the Tailnet Key Authority.
StateID1 uint64
StateID2 uint64
}
// expandedSignatureV1 is the expanded form of a [tka.Signature], which
// describes a signature over an AUM. This signature can be verified
// using the key referenced by KeyID.
type expandedSignatureV1 struct {
KeyID string
Signature string
}
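
As a consumer-side illustration (an editor's sketch, not part of the diff), the Raw and Hash fields of a v1 entry can be cross-checked with the same tka helpers the printer uses; verifyLogMessageV1 is a hypothetical name, and the imports match the file above (encoding/base64, fmt, tailscale.com/tka):

// verifyLogMessageV1 re-derives the AUM hash from a v1 entry's Raw field
// and compares it against the entry's Hash field.
func verifyLogMessageV1(hash, raw string) error {
	rawBytes, err := base64.URLEncoding.DecodeString(raw)
	if err != nil {
		return fmt.Errorf("decoding Raw: %w", err)
	}
	var aum tka.AUM
	if err := aum.Unserialize(rawBytes); err != nil {
		return fmt.Errorf("decoding AUM: %w", err)
	}
	if got := aum.Hash().String(); got != hash {
		return fmt.Errorf("hash mismatch: got %v, want %v", got, hash)
	}
	return nil
}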

@ -180,7 +180,11 @@ func printReport(dm *tailcfg.DERPMap, report *netcheck.Report) error {
printf("\t* Nearest DERP: unknown (no response to latency probes)\n") printf("\t* Nearest DERP: unknown (no response to latency probes)\n")
} else { } else {
if report.PreferredDERP != 0 { if report.PreferredDERP != 0 {
printf("\t* Nearest DERP: %v\n", dm.Regions[report.PreferredDERP].RegionName) if region, ok := dm.Regions[report.PreferredDERP]; ok {
printf("\t* Nearest DERP: %v\n", region.RegionName)
} else {
printf("\t* Nearest DERP: %v (region not found in map)\n", report.PreferredDERP)
}
} else { } else {
printf("\t* Nearest DERP: [none]\n") printf("\t* Nearest DERP: [none]\n")
} }

@ -10,10 +10,11 @@ import (
"context" "context"
"crypto/rand" "crypto/rand"
"encoding/hex" "encoding/hex"
"encoding/json" jsonv1 "encoding/json"
"errors" "errors"
"flag" "flag"
"fmt" "fmt"
"io"
"os" "os"
"strconv" "strconv"
"strings" "strings"
@ -21,6 +22,7 @@ import (
"github.com/mattn/go-isatty" "github.com/mattn/go-isatty"
"github.com/peterbourgon/ff/v3/ffcli" "github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/cmd/tailscale/cli/jsonoutput"
"tailscale.com/ipn/ipnstate" "tailscale.com/ipn/ipnstate"
"tailscale.com/tka" "tailscale.com/tka"
"tailscale.com/tsconst" "tailscale.com/tsconst"
@ -219,7 +221,7 @@ func runNetworkLockStatus(ctx context.Context, args []string) error {
} }
if nlStatusArgs.json { if nlStatusArgs.json {
enc := json.NewEncoder(os.Stdout) enc := jsonv1.NewEncoder(os.Stdout)
enc.SetIndent("", " ") enc.SetIndent("", " ")
return enc.Encode(st) return enc.Encode(st)
} }
@ -600,7 +602,7 @@ func runNetworkLockDisablementKDF(ctx context.Context, args []string) error {
var nlLogArgs struct { var nlLogArgs struct {
limit int limit int
json bool json jsonoutput.JSONSchemaVersion
} }
var nlLogCmd = &ffcli.Command{ var nlLogCmd = &ffcli.Command{
@ -612,7 +614,7 @@ var nlLogCmd = &ffcli.Command{
FlagSet: (func() *flag.FlagSet { FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("lock log") fs := newFlagSet("lock log")
fs.IntVar(&nlLogArgs.limit, "limit", 50, "max number of updates to list") fs.IntVar(&nlLogArgs.limit, "limit", 50, "max number of updates to list")
fs.BoolVar(&nlLogArgs.json, "json", false, "output in JSON format (WARNING: format subject to change)") fs.Var(&nlLogArgs.json, "json", "output in JSON format")
return fs return fs
})(), })(),
} }
@ -678,7 +680,7 @@ func nlDescribeUpdate(update ipnstate.NetworkLockUpdate, color bool) (string, er
default: default:
// Print a JSON encoding of the AUM as a fallback. // Print a JSON encoding of the AUM as a fallback.
e := json.NewEncoder(&stanza) e := jsonv1.NewEncoder(&stanza)
e.SetIndent("", "\t") e.SetIndent("", "\t")
if err := e.Encode(aum); err != nil { if err := e.Encode(aum); err != nil {
return "", err return "", err
@ -702,14 +704,21 @@ func runNetworkLockLog(ctx context.Context, args []string) error {
if err != nil { if err != nil {
return fixTailscaledConnectError(err) return fixTailscaledConnectError(err)
} }
if nlLogArgs.json {
enc := json.NewEncoder(Stdout)
enc.SetIndent("", " ")
return enc.Encode(updates)
}
out, useColor := colorableOutput() out, useColor := colorableOutput()
return printNetworkLockLog(updates, out, nlLogArgs.json, useColor)
}
func printNetworkLockLog(updates []ipnstate.NetworkLockUpdate, out io.Writer, jsonSchema jsonoutput.JSONSchemaVersion, useColor bool) error {
if jsonSchema.IsSet {
if jsonSchema.Value == 1 {
return jsonoutput.PrintNetworkLockJSONV1(out, updates)
}
return fmt.Errorf("unrecognised version: %d", jsonSchema.Value)
}
for _, update := range updates { for _, update := range updates {
stanza, err := nlDescribeUpdate(update, useColor) stanza, err := nlDescribeUpdate(update, useColor)
if err != nil { if err != nil {

@ -0,0 +1,204 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package cli
import (
"bytes"
"testing"
"github.com/google/go-cmp/cmp"
"tailscale.com/cmd/tailscale/cli/jsonoutput"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tka"
"tailscale.com/types/tkatype"
)
func TestNetworkLockLogOutput(t *testing.T) {
votes := uint(1)
aum1 := tka.AUM{
MessageKind: tka.AUMAddKey,
Key: &tka.Key{
Kind: tka.Key25519,
Votes: 1,
Public: []byte{2, 2},
},
}
h1 := aum1.Hash()
aum2 := tka.AUM{
MessageKind: tka.AUMRemoveKey,
KeyID: []byte{3, 3},
PrevAUMHash: h1[:],
Signatures: []tkatype.Signature{
{
KeyID: []byte{3, 4},
Signature: []byte{4, 5},
},
},
Meta: map[string]string{"en": "three", "de": "drei", "es": "tres"},
}
h2 := aum2.Hash()
aum3 := tka.AUM{
MessageKind: tka.AUMCheckpoint,
PrevAUMHash: h2[:],
State: &tka.State{
Keys: []tka.Key{
{
Kind: tka.Key25519,
Votes: 1,
Public: []byte{1, 1},
Meta: map[string]string{"en": "one", "de": "eins", "es": "uno"},
},
},
DisablementSecrets: [][]byte{
{1, 2, 3},
{4, 5, 6},
{7, 8, 9},
},
},
Votes: &votes,
}
updates := []ipnstate.NetworkLockUpdate{
{
Hash: aum3.Hash(),
Change: aum3.MessageKind.String(),
Raw: aum3.Serialize(),
},
{
Hash: aum2.Hash(),
Change: aum2.MessageKind.String(),
Raw: aum2.Serialize(),
},
{
Hash: aum1.Hash(),
Change: aum1.MessageKind.String(),
Raw: aum1.Serialize(),
},
}
t.Run("human-readable", func(t *testing.T) {
t.Parallel()
var outBuf bytes.Buffer
json := jsonoutput.JSONSchemaVersion{}
useColor := false
if err := printNetworkLockLog(updates, &outBuf, json, useColor); err != nil {
t.Fatal(err)
}
t.Logf("%s", outBuf.String())
want := `update 4M4Q3IXBARPQMFVXHJBDCYQMWU5H5FBKD7MFF75HE4O5JMIWR2UA (checkpoint)
Disablement values:
- 010203
- 040506
- 070809
Keys:
Type: 25519
KeyID: tlpub:0101
Metadata: map[de:eins en:one es:uno]
update BKVVXHOVBW7Y7YXYTLVVLMNSYG6DS5GVRVSYZLASNU3AQKA732XQ (remove-key)
KeyID: tlpub:0303
update UKJIKFHILQ62AEN7MQIFHXJ6SFVDGQCQA3OHVI3LWVPM736EMSAA (add-key)
Type: 25519
KeyID: tlpub:0202
`
if diff := cmp.Diff(outBuf.String(), want); diff != "" {
t.Fatalf("wrong output (-got, +want):\n%s", diff)
}
})
jsonV1 := `{
"SchemaVersion": "1",
"Messages": [
{
"Hash": "4M4Q3IXBARPQMFVXHJBDCYQMWU5H5FBKD7MFF75HE4O5JMIWR2UA",
"AUM": {
"MessageKind": "checkpoint",
"PrevAUMHash": "BKVVXHOVBW7Y7YXYTLVVLMNSYG6DS5GVRVSYZLASNU3AQKA732XQ",
"State": {
"DisablementSecrets": [
"010203",
"040506",
"070809"
],
"Keys": [
{
"Kind": "25519",
"Votes": 1,
"Public": "tlpub:0101",
"Meta": {
"de": "eins",
"en": "one",
"es": "uno"
}
}
],
"StateID1": 0,
"StateID2": 0
},
"Votes": 1
},
"Raw": "pAEFAlggCqtbndUNv4_i-JrrVbGywbw5dNWNZYysEm02CCgf3q8FowH2AoNDAQIDQwQFBkMHCAkDgaQBAQIBA0IBAQyjYmRlZGVpbnNiZW5jb25lYmVzY3VubwYB"
},
{
"Hash": "BKVVXHOVBW7Y7YXYTLVVLMNSYG6DS5GVRVSYZLASNU3AQKA732XQ",
"AUM": {
"MessageKind": "remove-key",
"PrevAUMHash": "UKJIKFHILQ62AEN7MQIFHXJ6SFVDGQCQA3OHVI3LWVPM736EMSAA",
"KeyID": "tlpub:0303",
"Meta": {
"de": "drei",
"en": "three",
"es": "tres"
},
"Signatures": [
{
"KeyID": "tlpub:0304",
"Signature": "BAU="
}
]
},
"Raw": "pQECAlggopKFFOhcPaARv2QQU90-kWozQFAG3Hqja7Vez-_EZIAEQgMDB6NiZGVkZHJlaWJlbmV0aHJlZWJlc2R0cmVzF4GiAUIDBAJCBAU="
},
{
"Hash": "UKJIKFHILQ62AEN7MQIFHXJ6SFVDGQCQA3OHVI3LWVPM736EMSAA",
"AUM": {
"MessageKind": "add-key",
"Key": {
"Kind": "25519",
"Votes": 1,
"Public": "tlpub:0202"
}
},
"Raw": "owEBAvYDowEBAgEDQgIC"
}
]
}
`
t.Run("json-1", func(t *testing.T) {
t.Parallel()
t.Logf("BOOM")
var outBuf bytes.Buffer
json := jsonoutput.JSONSchemaVersion{
IsSet: true,
Value: 1,
}
useColor := false
if err := printNetworkLockLog(updates, &outBuf, json, useColor); err != nil {
t.Fatal(err)
}
want := jsonV1
t.Logf("%s", outBuf.String())
if diff := cmp.Diff(outBuf.String(), want); diff != "" {
t.Fatalf("wrong output (-got, +want):\n%s", diff)
}
})
}

@ -40,7 +40,7 @@ func init() {
var serveCmd = func() *ffcli.Command { var serveCmd = func() *ffcli.Command {
se := &serveEnv{lc: &localClient} se := &serveEnv{lc: &localClient}
// previously used to serve legacy newFunnelCommand unless useWIPCode is true // previously used to serve legacy newFunnelCommand unless useWIPCode is true
// change is limited to make a revert easier and full cleanup to come after the relase. // change is limited to make a revert easier and full cleanup to come after the release.
// TODO(tylersmalley): cleanup and removal of newServeLegacyCommand as of 2023-10-16 // TODO(tylersmalley): cleanup and removal of newServeLegacyCommand as of 2023-10-16
return newServeV2Command(se, serve) return newServeV2Command(se, serve)
} }
@ -149,6 +149,7 @@ type localServeClient interface {
IncrementCounter(ctx context.Context, name string, delta int) error IncrementCounter(ctx context.Context, name string, delta int) error
GetPrefs(ctx context.Context) (*ipn.Prefs, error) GetPrefs(ctx context.Context) (*ipn.Prefs, error)
EditPrefs(ctx context.Context, mp *ipn.MaskedPrefs) (*ipn.Prefs, error) EditPrefs(ctx context.Context, mp *ipn.MaskedPrefs) (*ipn.Prefs, error)
CheckSOMarkInUse(ctx context.Context) (bool, error)
} }
// serveEnv is the environment the serve command runs within. All I/O should be // serveEnv is the environment the serve command runs within. All I/O should be
@ -168,14 +169,15 @@ type serveEnv struct {
http uint // HTTP port http uint // HTTP port
tcp uint // TCP port tcp uint // TCP port
tlsTerminatedTCP uint // a TLS terminated TCP port tlsTerminatedTCP uint // a TLS terminated TCP port
proxyProtocol uint // PROXY protocol version (1 or 2)
subcmd serveMode // subcommand subcmd serveMode // subcommand
yes bool // update without prompt yes bool // update without prompt
service tailcfg.ServiceName // service name service tailcfg.ServiceName // service name
tun bool // redirect traffic to OS for service tun bool // redirect traffic to OS for service
allServices bool // apply config file to all services allServices bool // apply config file to all services
acceptAppCaps []tailcfg.PeerCapability // app capabilities to forward
lc localServeClient // localClient interface, specific to serve lc localServeClient // localClient interface, specific to serve
// optional stuff for tests: // optional stuff for tests:
testFlagOut io.Writer testFlagOut io.Writer
testStdout io.Writer testStdout io.Writer
@ -570,7 +572,7 @@ func (e *serveEnv) handleTCPServe(ctx context.Context, srcType string, srcPort u
return fmt.Errorf("cannot serve TCP; already serving web on %d", srcPort) return fmt.Errorf("cannot serve TCP; already serving web on %d", srcPort)
} }
sc.SetTCPForwarding(srcPort, fwdAddr, terminateTLS, dnsName) sc.SetTCPForwarding(srcPort, fwdAddr, terminateTLS, 0 /* proxy proto */, dnsName)
if !reflect.DeepEqual(cursc, sc) { if !reflect.DeepEqual(cursc, sc) {
if err := e.lc.SetServeConfig(ctx, sc); err != nil { if err := e.lc.SetServeConfig(ctx, sc); err != nil {

@ -860,6 +860,8 @@ type fakeLocalServeClient struct {
setCount int // counts calls to SetServeConfig setCount int // counts calls to SetServeConfig
queryFeatureResponse *mockQueryFeatureResponse // mock response to QueryFeature calls queryFeatureResponse *mockQueryFeatureResponse // mock response to QueryFeature calls
prefs *ipn.Prefs // fake preferences, used to test GetPrefs and SetPrefs prefs *ipn.Prefs // fake preferences, used to test GetPrefs and SetPrefs
SOMarkInUse bool // fake SO mark in use status
statusWithoutPeers *ipnstate.Status // nil for fakeStatus
} }
// fakeStatus is a fake ipnstate.Status value for tests. // fakeStatus is a fake ipnstate.Status value for tests.
@ -880,8 +882,11 @@ var fakeStatus = &ipnstate.Status{
} }
func (lc *fakeLocalServeClient) StatusWithoutPeers(ctx context.Context) (*ipnstate.Status, error) { func (lc *fakeLocalServeClient) StatusWithoutPeers(ctx context.Context) (*ipnstate.Status, error) {
if lc.statusWithoutPeers == nil {
return fakeStatus, nil return fakeStatus, nil
} }
return lc.statusWithoutPeers, nil
}
func (lc *fakeLocalServeClient) GetServeConfig(ctx context.Context) (*ipn.ServeConfig, error) { func (lc *fakeLocalServeClient) GetServeConfig(ctx context.Context) (*ipn.ServeConfig, error) {
return lc.config.Clone(), nil return lc.config.Clone(), nil
@ -933,6 +938,10 @@ func (lc *fakeLocalServeClient) IncrementCounter(ctx context.Context, name strin
return nil // unused in tests return nil // unused in tests
} }
func (lc *fakeLocalServeClient) CheckSOMarkInUse(ctx context.Context) (bool, error) {
return lc.SOMarkInUse, nil
}
// exactError returns an error checker that wants exactly the provided want error. // exactError returns an error checker that wants exactly the provided want error.
// If optName is non-empty, it's used in the error message. // If optName is non-empty, it's used in the error message.
func exactErr(want error, optName ...string) func(error) string { func exactErr(want error, optName ...string) func(error) string {

@ -20,6 +20,8 @@ import (
"os/signal" "os/signal"
"path" "path"
"path/filepath" "path/filepath"
"regexp"
"runtime"
"slices" "slices"
"sort" "sort"
"strconv" "strconv"
@ -32,6 +34,7 @@ import (
"tailscale.com/ipn/ipnstate" "tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
"tailscale.com/types/ipproto" "tailscale.com/types/ipproto"
"tailscale.com/util/dnsname"
"tailscale.com/util/mak" "tailscale.com/util/mak"
"tailscale.com/util/prompt" "tailscale.com/util/prompt"
"tailscale.com/util/set" "tailscale.com/util/set"
@ -96,6 +99,41 @@ func (b *bgBoolFlag) String() string {
return strconv.FormatBool(b.Value) return strconv.FormatBool(b.Value)
} }
type acceptAppCapsFlag struct {
Value *[]tailcfg.PeerCapability
}
// An application capability name has the form {domain}/{name}.
// Both parts must use the (simplified) FQDN label character set.
// The "name" can contain forward slashes.
// \pL = Unicode Letter, \pN = Unicode Number, - = Hyphen
var validAppCap = regexp.MustCompile(`^([\pL\pN-]+\.)+[\pL\pN-]+\/[\pL\pN-/]+$`)
// Set appends s to the list of appCaps to accept.
func (u *acceptAppCapsFlag) Set(s string) error {
if s == "" {
return nil
}
appCaps := strings.Split(s, ",")
for _, appCap := range appCaps {
appCap = strings.TrimSpace(appCap)
if !validAppCap.MatchString(appCap) {
return fmt.Errorf("%q does not match the form {domain}/{name}, where domain must be a fully qualified domain name", appCap)
}
*u.Value = append(*u.Value, tailcfg.PeerCapability(appCap))
}
return nil
}
// String returns the string representation of the slice of appCaps to accept.
func (u *acceptAppCapsFlag) String() string {
s := make([]string, len(*u.Value))
for i, v := range *u.Value {
s[i] = string(v)
}
return strings.Join(s, ",")
}
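
// Editor's sketch (not part of the diff): how acceptAppCapsFlag composes with
// a flag.FlagSet; the set name and capability values are illustrative.
//
//	var caps []tailcfg.PeerCapability
//	fs := flag.NewFlagSet("example", flag.ContinueOnError)
//	fs.Var(&acceptAppCapsFlag{Value: &caps}, "accept-app-caps", "app capabilities to forward")
//	_ = fs.Parse([]string{"--accept-app-caps=example.com/cap/foo,example.com/cap/bar"})
//	// caps == []tailcfg.PeerCapability{"example.com/cap/foo", "example.com/cap/bar"}
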
var serveHelpCommon = strings.TrimSpace(` var serveHelpCommon = strings.TrimSpace(`
<target> can be a file, directory, text, or most commonly the location to a service running on the <target> can be a file, directory, text, or most commonly the location to a service running on the
local machine. The location to the service can be expressed as a port number (e.g., 3000), local machine. The location to the service can be expressed as a port number (e.g., 3000),
@ -199,10 +237,12 @@ func newServeV2Command(e *serveEnv, subcmd serveMode) *ffcli.Command {
fs.UintVar(&e.https, "https", 0, "Expose an HTTPS server at the specified port (default mode)") fs.UintVar(&e.https, "https", 0, "Expose an HTTPS server at the specified port (default mode)")
if subcmd == serve { if subcmd == serve {
fs.UintVar(&e.http, "http", 0, "Expose an HTTP server at the specified port") fs.UintVar(&e.http, "http", 0, "Expose an HTTP server at the specified port")
fs.Var(&acceptAppCapsFlag{Value: &e.acceptAppCaps}, "accept-app-caps", "App capabilities to forward to the server (specify multiple capabilities with a comma-separated list)")
fs.Var(&serviceNameFlag{Value: &e.service}, "service", "Serve for a service with distinct virtual IP instead on node itself.")
} }
fs.UintVar(&e.tcp, "tcp", 0, "Expose a TCP forwarder to forward raw TCP packets at the specified port") fs.UintVar(&e.tcp, "tcp", 0, "Expose a TCP forwarder to forward raw TCP packets at the specified port")
fs.UintVar(&e.tlsTerminatedTCP, "tls-terminated-tcp", 0, "Expose a TCP forwarder to forward TLS-terminated TCP packets at the specified port") fs.UintVar(&e.tlsTerminatedTCP, "tls-terminated-tcp", 0, "Expose a TCP forwarder to forward TLS-terminated TCP packets at the specified port")
fs.Var(&serviceNameFlag{Value: &e.service}, "service", "Serve for a service with distinct virtual IP instead on node itself.") fs.UintVar(&e.proxyProtocol, "proxy-protocol", 0, "PROXY protocol version (1 or 2) for TCP forwarding")
fs.BoolVar(&e.yes, "yes", false, "Update without interactive prompts (default false)") fs.BoolVar(&e.yes, "yes", false, "Update without interactive prompts (default false)")
fs.BoolVar(&e.tun, "tun", false, "Forward all traffic to the local machine (default false), only supported for services. Refer to docs for more information.") fs.BoolVar(&e.tun, "tun", false, "Forward all traffic to the local machine (default false), only supported for services. Refer to docs for more information.")
}), }),
@ -255,7 +295,7 @@ func newServeV2Command(e *serveEnv, subcmd serveMode) *ffcli.Command {
Name: "get-config", Name: "get-config",
ShortUsage: fmt.Sprintf("tailscale %s get-config <file> [--service=<service>] [--all]", info.Name), ShortUsage: fmt.Sprintf("tailscale %s get-config <file> [--service=<service>] [--all]", info.Name),
ShortHelp: "Get service configuration to save to a file", ShortHelp: "Get service configuration to save to a file",
LongHelp: hidden + "Get the configuration for services that this node is currently hosting in a\n" + LongHelp: "Get the configuration for services that this node is currently hosting in a\n" +
"format that can later be provided to set-config. This can be used to declaratively set\n" + "format that can later be provided to set-config. This can be used to declaratively set\n" +
"configuration for a service host.", "configuration for a service host.",
Exec: e.runServeGetConfig, Exec: e.runServeGetConfig,
@ -268,10 +308,11 @@ func newServeV2Command(e *serveEnv, subcmd serveMode) *ffcli.Command {
Name: "set-config", Name: "set-config",
ShortUsage: fmt.Sprintf("tailscale %s set-config <file> [--service=<service>] [--all]", info.Name), ShortUsage: fmt.Sprintf("tailscale %s set-config <file> [--service=<service>] [--all]", info.Name),
ShortHelp: "Define service configuration from a file", ShortHelp: "Define service configuration from a file",
LongHelp: hidden + "Read the provided configuration file and use it to declaratively set the configuration\n" + LongHelp: "Read the provided configuration file and use it to declaratively set the configuration\n" +
"for either a single service, or for all services that this node is hosting. If --service is specified,\n" + "for either a single service, or for all services that this node is hosting. If --service is specified,\n" +
"all endpoint handlers for that service are overwritten. If --all is specified, all endpoint handlers for\n" + "all endpoint handlers for that service are overwritten. If --all is specified, all endpoint handlers for\n" +
"all services are overwritten.", "all services are overwritten.\n\n" +
"For information on the file format, see tailscale.com/kb/1589/tailscale-services-configuration-file",
Exec: e.runServeSetConfig, Exec: e.runServeSetConfig,
FlagSet: e.newFlags("serve-set-config", func(fs *flag.FlagSet) { FlagSet: e.newFlags("serve-set-config", func(fs *flag.FlagSet) {
fs.BoolVar(&e.allServices, "all", false, "apply config to all services") fs.BoolVar(&e.allServices, "all", false, "apply config to all services")
@ -375,6 +416,14 @@ func (e *serveEnv) runServeCombined(subcmd serveMode) execFunc {
return errHelpFunc(subcmd) return errHelpFunc(subcmd)
} }
if (srvType == serveTypeHTTP || srvType == serveTypeHTTPS) && e.proxyProtocol != 0 {
return fmt.Errorf("PROXY protocol is only supported for TCP forwarding, not HTTP/HTTPS")
}
// Validate PROXY protocol version
if e.proxyProtocol != 0 && e.proxyProtocol != 1 && e.proxyProtocol != 2 {
return fmt.Errorf("invalid PROXY protocol version %d; must be 1 or 2", e.proxyProtocol)
}
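
// Editor's note (sketch, mirroring the test cases added later in this change):
// under these two checks,
//
//	--tcp=8000 --proxy-protocol=1 tcp://localhost:5432               is accepted
//	--tls-terminated-tcp=443 --proxy-protocol=2 tcp://localhost:5432 is accepted
//	--https=443 --proxy-protocol=1 http://localhost:3000             errors (TCP forwarding only)
//	--tcp=8000 --proxy-protocol=3 tcp://localhost:5432               errors (version must be 1 or 2)
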
sc, err := e.lc.GetServeConfig(ctx) sc, err := e.lc.GetServeConfig(ctx)
if err != nil { if err != nil {
return fmt.Errorf("error getting serve config: %w", err) return fmt.Errorf("error getting serve config: %w", err)
@ -420,20 +469,19 @@ func (e *serveEnv) runServeCombined(subcmd serveMode) execFunc {
svcName = e.service svcName = e.service
dnsName = e.service.String() dnsName = e.service.String()
} }
tagged := st.Self.Tags != nil && st.Self.Tags.Len() > 0
if forService && !tagged && !turnOff {
return errors.New("service hosts must be tagged nodes")
}
if !forService && srvType == serveTypeTUN { if !forService && srvType == serveTypeTUN {
return errors.New("tun mode is only supported for services") return errors.New("tun mode is only supported for services")
} }
wantFg := !e.bg.Value && !turnOff wantFg := !e.bg.Value && !turnOff
if wantFg { if wantFg {
// validate the config before creating a WatchIPNBus session
if err := e.validateConfig(parentSC, srvPort, srvType, svcName); err != nil {
return err
}
// if foreground mode, create a WatchIPNBus session // if foreground mode, create a WatchIPNBus session
// and use the nested config for all following operations // and use the nested config for all following operations
// TODO(marwan-at-work): nested-config validations should happen here or previous to this point. // TODO(marwan-at-work): nested-config validations should happen here or previous to this point.
watcher, err = e.lc.WatchIPNBus(ctx, ipn.NotifyInitialState|ipn.NotifyNoPrivateKeys) watcher, err = e.lc.WatchIPNBus(ctx, ipn.NotifyInitialState)
if err != nil { if err != nil {
return err return err
} }
@ -455,9 +503,6 @@ func (e *serveEnv) runServeCombined(subcmd serveMode) execFunc {
// only unset serve when trying to unset with type and port flags. // only unset serve when trying to unset with type and port flags.
err = e.unsetServe(sc, dnsName, srvType, srvPort, mount, magicDNSSuffix) err = e.unsetServe(sc, dnsName, srvType, srvPort, mount, magicDNSSuffix)
} else { } else {
if err := e.validateConfig(parentSC, srvPort, srvType, svcName); err != nil {
return err
}
if forService { if forService {
e.addServiceToPrefs(ctx, svcName) e.addServiceToPrefs(ctx, svcName)
} }
@ -465,7 +510,10 @@ func (e *serveEnv) runServeCombined(subcmd serveMode) execFunc {
if len(args) > 0 { if len(args) > 0 {
target = args[0] target = args[0]
} }
err = e.setServe(sc, dnsName, srvType, srvPort, mount, target, funnel, magicDNSSuffix) if err := e.shouldWarnRemoteDestCompatibility(ctx, target); err != nil {
return err
}
err = e.setServe(sc, dnsName, srvType, srvPort, mount, target, funnel, magicDNSSuffix, e.acceptAppCaps, int(e.proxyProtocol))
msg = e.messageForPort(sc, st, dnsName, srvType, srvPort) msg = e.messageForPort(sc, st, dnsName, srvType, srvPort)
} }
if err != nil { if err != nil {
@ -786,7 +834,7 @@ func (e *serveEnv) runServeSetConfig(ctx context.Context, args []string) (err er
for name, details := range scf.Services { for name, details := range scf.Services {
for ppr, ep := range details.Endpoints { for ppr, ep := range details.Endpoints {
if ep.Protocol == conffile.ProtoTUN { if ep.Protocol == conffile.ProtoTUN {
err := e.setServe(sc, name.String(), serveTypeTUN, 0, "", "", false, magicDNSSuffix) err := e.setServe(sc, name.String(), serveTypeTUN, 0, "", "", false, magicDNSSuffix, nil, 0 /* proxy protocol */)
if err != nil { if err != nil {
return err return err
} }
@ -808,7 +856,7 @@ func (e *serveEnv) runServeSetConfig(ctx context.Context, args []string) (err er
portStr := fmt.Sprint(destPort) portStr := fmt.Sprint(destPort)
target = fmt.Sprintf("%s://%s", ep.Protocol, net.JoinHostPort(ep.Destination, portStr)) target = fmt.Sprintf("%s://%s", ep.Protocol, net.JoinHostPort(ep.Destination, portStr))
} }
err := e.setServe(sc, name.String(), serveType, port, "/", target, false, magicDNSSuffix) err := e.setServe(sc, name.String(), serveType, port, "/", target, false, magicDNSSuffix, nil, 0 /* proxy protocol */)
if err != nil { if err != nil {
return fmt.Errorf("service %q: %w", name, err) return fmt.Errorf("service %q: %w", name, err)
} }
@ -851,72 +899,12 @@ func (e *serveEnv) runServeSetConfig(ctx context.Context, args []string) (err er
return e.lc.SetServeConfig(ctx, sc) return e.lc.SetServeConfig(ctx, sc)
} }
const backgroundExistsMsg = "background configuration already exists, use `tailscale %s --%s=%d off` to remove the existing configuration" func (e *serveEnv) setServe(sc *ipn.ServeConfig, dnsName string, srvType serveType, srvPort uint16, mount string, target string, allowFunnel bool, mds string, caps []tailcfg.PeerCapability, proxyProtocol int) error {
// validateConfig checks if the serve config is valid to serve the type wanted on the port.
// dnsName is a FQDN or a serviceName (with `svc:` prefix).
func (e *serveEnv) validateConfig(sc *ipn.ServeConfig, port uint16, wantServe serveType, svcName tailcfg.ServiceName) error {
var tcpHandlerForPort *ipn.TCPPortHandler
if svcName != noService {
svc := sc.Services[svcName]
if svc == nil {
return nil
}
if wantServe == serveTypeTUN && (svc.TCP != nil || svc.Web != nil) {
return errors.New("service already has a TCP or Web handler, cannot serve in TUN mode")
}
if svc.Tun && wantServe != serveTypeTUN {
return errors.New("service is already being served in TUN mode")
}
if svc.TCP[port] == nil {
return nil
}
tcpHandlerForPort = svc.TCP[port]
} else {
sc, isFg := sc.FindConfig(port)
if sc == nil {
return nil
}
if isFg {
return errors.New("foreground already exists under this port")
}
if !e.bg.Value {
return fmt.Errorf(backgroundExistsMsg, infoMap[e.subcmd].Name, wantServe.String(), port)
}
tcpHandlerForPort = sc.TCP[port]
}
existingServe := serveFromPortHandler(tcpHandlerForPort)
if wantServe != existingServe {
target := svcName
if target == noService {
target = "machine"
}
return fmt.Errorf("want to serve %q but port is already serving %q for %q", wantServe, existingServe, target)
}
return nil
}
func serveFromPortHandler(tcp *ipn.TCPPortHandler) serveType {
switch {
case tcp.HTTP:
return serveTypeHTTP
case tcp.HTTPS:
return serveTypeHTTPS
case tcp.TerminateTLS != "":
return serveTypeTLSTerminatedTCP
case tcp.TCPForward != "":
return serveTypeTCP
default:
return -1
}
}
func (e *serveEnv) setServe(sc *ipn.ServeConfig, dnsName string, srvType serveType, srvPort uint16, mount string, target string, allowFunnel bool, mds string) error {
// update serve config based on the type // update serve config based on the type
switch srvType { switch srvType {
case serveTypeHTTPS, serveTypeHTTP: case serveTypeHTTPS, serveTypeHTTP:
useTLS := srvType == serveTypeHTTPS useTLS := srvType == serveTypeHTTPS
err := e.applyWebServe(sc, dnsName, srvPort, useTLS, mount, target, mds) err := e.applyWebServe(sc, dnsName, srvPort, useTLS, mount, target, mds, caps)
if err != nil { if err != nil {
return fmt.Errorf("failed apply web serve: %w", err) return fmt.Errorf("failed apply web serve: %w", err)
} }
@ -924,7 +912,7 @@ func (e *serveEnv) setServe(sc *ipn.ServeConfig, dnsName string, srvType serveTy
if e.setPath != "" { if e.setPath != "" {
return fmt.Errorf("cannot mount a path for TCP serve") return fmt.Errorf("cannot mount a path for TCP serve")
} }
err := e.applyTCPServe(sc, dnsName, srvType, srvPort, target) err := e.applyTCPServe(sc, dnsName, srvType, srvPort, target, proxyProtocol)
if err != nil { if err != nil {
return fmt.Errorf("failed to apply TCP serve: %w", err) return fmt.Errorf("failed to apply TCP serve: %w", err)
} }
@ -957,6 +945,7 @@ var (
msgDisableServiceProxy = "To disable the proxy, run: tailscale serve --service=%s --%s=%d off" msgDisableServiceProxy = "To disable the proxy, run: tailscale serve --service=%s --%s=%d off"
msgDisableServiceTun = "To disable the service in TUN mode, run: tailscale serve --service=%s --tun off" msgDisableServiceTun = "To disable the service in TUN mode, run: tailscale serve --service=%s --tun off"
msgDisableService = "To remove config for the service, run: tailscale serve clear %s" msgDisableService = "To remove config for the service, run: tailscale serve clear %s"
msgWarnRemoteDestCompatibility = "Warning: %s doesn't support connecting to remote destinations from non-default route, see tailscale.com/kb/1552/tailscale-services for detail."
msgToExit = "Press Ctrl+C to exit." msgToExit = "Press Ctrl+C to exit."
) )
@ -1050,6 +1039,9 @@ func (e *serveEnv) messageForPort(sc *ipn.ServeConfig, st *ipnstate.Status, dnsN
if tcpHandler.TerminateTLS != "" { if tcpHandler.TerminateTLS != "" {
tlsStatus = "TLS terminated" tlsStatus = "TLS terminated"
} }
if ver := tcpHandler.ProxyProtocol; ver != 0 {
tlsStatus = fmt.Sprintf("%s, PROXY protocol v%d", tlsStatus, ver)
}
output.WriteString(fmt.Sprintf("|-- tcp://%s:%d (%s)\n", host, srvPort, tlsStatus)) output.WriteString(fmt.Sprintf("|-- tcp://%s:%d (%s)\n", host, srvPort, tlsStatus))
for _, a := range ips { for _, a := range ips {
@ -1080,7 +1072,78 @@ func (e *serveEnv) messageForPort(sc *ipn.ServeConfig, st *ipnstate.Status, dnsN
return output.String() return output.String()
} }
func (e *serveEnv) applyWebServe(sc *ipn.ServeConfig, dnsName string, srvPort uint16, useTLS bool, mount, target string, mds string) error { // isRemote reports whether the given destination from serve config
// is a remote destination.
func isRemote(target string) bool {
// target being a port number means it's localhost
if _, err := strconv.ParseUint(target, 10, 16); err == nil {
return false
}
// prepend tmp:// if no scheme is present just to help parsing
if !strings.Contains(target, "://") {
target = "tmp://" + target
}
// make sure we can parse the target, whether it's a full URL or just a host:port
u, err := url.ParseRequestURI(target)
if err != nil {
// If we can't parse the target, it doesn't matter if it's remote or not
return false
}
validHN := dnsname.ValidHostname(u.Hostname()) == nil
validIP := net.ParseIP(u.Hostname()) != nil
if !validHN && !validIP {
return false
}
if u.Hostname() == "localhost" || u.Hostname() == "127.0.0.1" || u.Hostname() == "::1" {
return false
}
return true
}
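
// Editor's sketch (not part of the diff): expected classifications under the
// rules above, assuming dnsname.ValidHostname accepts single-label names.
//
//	isRemote("3000")                  == false // bare port implies localhost
//	isRemote("http://127.0.0.1:3000") == false // loopback
//	isRemote("192.168.1.1:3000")      == true  // remote IP; tmp:// scheme added for parsing
//	isRemote("tcp://somehost:5432")   == true  // remote hostname
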
// shouldWarnRemoteDestCompatibility returns a warning error if the user's
// current OS/environment may not be compatible with the serve config's
// remote proxy destination, and nil otherwise.
func (e *serveEnv) shouldWarnRemoteDestCompatibility(ctx context.Context, target string) error {
// no target means nothing to check
if target == "" {
return nil
}
if filepath.IsAbs(target) || strings.HasPrefix(target, "text:") {
// local path or text target, nothing to check
return nil
}
// only check for remote destinations
if !isRemote(target) {
return nil
}
// Check if running as Mac extension and warn
if version.IsMacAppStore() || version.IsMacSysExt() {
return fmt.Errorf(msgWarnRemoteDestCompatibility, "the MacOS extension")
}
// On Linux, check whether tailscaled is running with
// TS_FORCE_LINUX_BIND_TO_DEVICE=true but the Tailscale bypass mark (SO_MARK)
// is not in use; if so, warn that the remote destination may not be reachable.
if runtime.GOOS == "linux" {
soMarkInUse, err := e.lc.CheckSOMarkInUse(ctx)
if err != nil {
log.Printf("error checking SO mark in use: %v", err)
return nil
}
if !soMarkInUse {
return fmt.Errorf(msgWarnRemoteDestCompatibility, "the Linux tailscaled without SO_MARK")
}
}
return nil
}
func (e *serveEnv) applyWebServe(sc *ipn.ServeConfig, dnsName string, srvPort uint16, useTLS bool, mount, target, mds string, caps []tailcfg.PeerCapability) error {
h := new(ipn.HTTPHandler) h := new(ipn.HTTPHandler)
switch { switch {
case strings.HasPrefix(target, "text:"): case strings.HasPrefix(target, "text:"):
@ -1114,6 +1177,7 @@ func (e *serveEnv) applyWebServe(sc *ipn.ServeConfig, dnsName string, srvPort ui
return err return err
} }
h.Proxy = t h.Proxy = t
h.AcceptAppCaps = caps
} }
// TODO: validation needs to check nested foreground configs // TODO: validation needs to check nested foreground configs
@ -1127,7 +1191,7 @@ func (e *serveEnv) applyWebServe(sc *ipn.ServeConfig, dnsName string, srvPort ui
return nil return nil
} }
func (e *serveEnv) applyTCPServe(sc *ipn.ServeConfig, dnsName string, srcType serveType, srcPort uint16, target string) error { func (e *serveEnv) applyTCPServe(sc *ipn.ServeConfig, dnsName string, srcType serveType, srcPort uint16, target string, proxyProtocol int) error {
var terminateTLS bool var terminateTLS bool
switch srcType { switch srcType {
case serveTypeTCP: case serveTypeTCP:
@ -1138,6 +1202,8 @@ func (e *serveEnv) applyTCPServe(sc *ipn.ServeConfig, dnsName string, srcType se
return fmt.Errorf("invalid TCP target %q", target) return fmt.Errorf("invalid TCP target %q", target)
} }
svcName := tailcfg.AsServiceName(dnsName)
targetURL, err := ipn.ExpandProxyTargetValue(target, []string{"tcp"}, "tcp") targetURL, err := ipn.ExpandProxyTargetValue(target, []string{"tcp"}, "tcp")
if err != nil { if err != nil {
return fmt.Errorf("unable to expand target: %v", err) return fmt.Errorf("unable to expand target: %v", err)
@ -1149,13 +1215,11 @@ func (e *serveEnv) applyTCPServe(sc *ipn.ServeConfig, dnsName string, srcType se
} }
// TODO: needs to account for multiple configs from foreground mode // TODO: needs to account for multiple configs from foreground mode
svcName := tailcfg.AsServiceName(dnsName)
if sc.IsServingWeb(srcPort, svcName) { if sc.IsServingWeb(srcPort, svcName) {
return fmt.Errorf("cannot serve TCP; already serving web on %d for %s", srcPort, dnsName) return fmt.Errorf("cannot serve TCP; already serving web on %d for %s", srcPort, dnsName)
} }
sc.SetTCPForwarding(srcPort, dstURL.Host, terminateTLS, dnsName) sc.SetTCPForwarding(srcPort, dstURL.Host, terminateTLS, proxyProtocol, dnsName)
return nil return nil
} }

@ -12,6 +12,7 @@ import (
"os" "os"
"path/filepath" "path/filepath"
"reflect" "reflect"
"regexp"
"slices" "slices"
"strconv" "strconv"
"strings" "strings"
@ -22,6 +23,7 @@ import (
"tailscale.com/ipn" "tailscale.com/ipn"
"tailscale.com/ipn/ipnstate" "tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
"tailscale.com/types/views"
) )
func TestServeDevConfigMutations(t *testing.T) { func TestServeDevConfigMutations(t *testing.T) {
@ -33,10 +35,11 @@ func TestServeDevConfigMutations(t *testing.T) {
} }
// group is a group of steps that share the same // group is a group of steps that share the same
// config mutation, but always starts from an empty config // config mutation
type group struct { type group struct {
name string name string
steps []step steps []step
initialState fakeLocalServeClient // use the zero value for empty config
} }
// create a temporary directory for path-based destinations
@ -217,10 +220,20 @@ func TestServeDevConfigMutations(t *testing.T) {
}}, }},
}, },
{ {
name: "invalid_host", name: "ip_host",
initialState: fakeLocalServeClient{
SOMarkInUse: true,
},
steps: []step{{ steps: []step{{
command: cmd("serve --https=443 --bg http://somehost:3000"), // invalid host command: cmd("serve --https=443 --bg http://192.168.1.1:3000"),
wantErr: anyErr(), want: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"foo.test.ts.net:443": {Handlers: map[string]*ipn.HTTPHandler{
"/": {Proxy: "http://192.168.1.1:3000"},
}},
},
},
}}, }},
}, },
{ {
@ -230,6 +243,16 @@ func TestServeDevConfigMutations(t *testing.T) {
wantErr: anyErr(), wantErr: anyErr(),
}}, }},
}, },
{
name: "no_scheme_remote_host_tcp",
initialState: fakeLocalServeClient{
SOMarkInUse: true,
},
steps: []step{{
command: cmd("serve --https=443 --bg 192.168.1.1:3000"),
wantErr: exactErrMsg(errHelp),
}},
},
{ {
name: "turn_off_https", name: "turn_off_https",
steps: []step{ steps: []step{
@ -399,15 +422,11 @@ func TestServeDevConfigMutations(t *testing.T) {
}, },
}}, }},
}, },
{
name: "unknown_host_tcp",
steps: []step{{
command: cmd("serve --tls-terminated-tcp=443 --bg tcp://somehost:5432"),
wantErr: exactErrMsg(errHelp),
}},
},
{ {
name: "tcp_port_too_low", name: "tcp_port_too_low",
initialState: fakeLocalServeClient{
SOMarkInUse: true,
},
steps: []step{{ steps: []step{{
command: cmd("serve --tls-terminated-tcp=443 --bg tcp://somehost:0"), command: cmd("serve --tls-terminated-tcp=443 --bg tcp://somehost:0"),
wantErr: exactErrMsg(errHelp), wantErr: exactErrMsg(errHelp),
@ -415,6 +434,9 @@ func TestServeDevConfigMutations(t *testing.T) {
}, },
{ {
name: "tcp_port_too_high", name: "tcp_port_too_high",
initialState: fakeLocalServeClient{
SOMarkInUse: true,
},
steps: []step{{ steps: []step{{
command: cmd("serve --tls-terminated-tcp=443 --bg tcp://somehost:65536"), command: cmd("serve --tls-terminated-tcp=443 --bg tcp://somehost:65536"),
wantErr: exactErrMsg(errHelp), wantErr: exactErrMsg(errHelp),
@ -529,6 +551,9 @@ func TestServeDevConfigMutations(t *testing.T) {
}, },
{ {
name: "bad_path", name: "bad_path",
initialState: fakeLocalServeClient{
SOMarkInUse: true,
},
steps: []step{{ steps: []step{{
command: cmd("serve --bg --https=443 bad/path"), command: cmd("serve --bg --https=443 bad/path"),
wantErr: exactErrMsg(errHelp), wantErr: exactErrMsg(errHelp),
@ -795,36 +820,186 @@ func TestServeDevConfigMutations(t *testing.T) {
}, },
}, },
{ {
name: "forground_with_bg_conflict", name: "advertise_service",
initialState: fakeLocalServeClient{
statusWithoutPeers: &ipnstate.Status{
BackendState: ipn.Running.String(),
Self: &ipnstate.PeerStatus{
DNSName: "foo.test.ts.net",
CapMap: tailcfg.NodeCapMap{
tailcfg.NodeAttrFunnel: nil,
tailcfg.CapabilityFunnelPorts + "?ports=443,8443": nil,
},
Tags: ptrToReadOnlySlice([]string{"some-tag"}),
},
CurrentTailnet: &ipnstate.TailnetStatus{MagicDNSSuffix: "test.ts.net"},
},
SOMarkInUse: true,
},
steps: []step{{
command: cmd("serve --service=svc:foo --http=80 text:foo"),
want: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:foo": {
TCP: map[uint16]*ipn.TCPPortHandler{
80: {HTTP: true},
},
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"foo.test.ts.net:80": {Handlers: map[string]*ipn.HTTPHandler{
"/": {Text: "foo"},
}},
},
},
},
},
}},
},
{
name: "advertise_service_from_untagged_node",
steps: []step{{
command: cmd("serve --service=svc:foo --http=80 text:foo"),
wantErr: anyErr(),
}},
},
{
name: "forward_grant_header",
steps: []step{ steps: []step{
{ {
command: cmd("serve --bg --http=3000 localhost:3000"), command: cmd("serve --bg --accept-app-caps=example.com/cap/foo 3000"),
want: &ipn.ServeConfig{ want: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{3000: {HTTP: true}}, TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{ Web: map[ipn.HostPort]*ipn.WebServerConfig{
"foo.test.ts.net:3000": {Handlers: map[string]*ipn.HTTPHandler{ "foo.test.ts.net:443": {Handlers: map[string]*ipn.HTTPHandler{
"/": {Proxy: "http://localhost:3000"}, "/": {
Proxy: "http://127.0.0.1:3000",
AcceptAppCaps: []tailcfg.PeerCapability{"example.com/cap/foo"},
},
}},
},
},
},
{
command: cmd("serve --bg --accept-app-caps=example.com/cap/foo,example.com/cap/bar 3000"),
want: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"foo.test.ts.net:443": {Handlers: map[string]*ipn.HTTPHandler{
"/": {
Proxy: "http://127.0.0.1:3000",
AcceptAppCaps: []tailcfg.PeerCapability{"example.com/cap/foo", "example.com/cap/bar"},
},
}},
},
},
},
{
command: cmd("serve --bg --accept-app-caps=example.com/cap/bar 3000"),
want: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"foo.test.ts.net:443": {Handlers: map[string]*ipn.HTTPHandler{
"/": {
Proxy: "http://127.0.0.1:3000",
AcceptAppCaps: []tailcfg.PeerCapability{"example.com/cap/bar"},
},
}},
},
},
},
},
},
{
name: "invalid_accept_caps_invalid_app_cap",
steps: []step{
{
command: cmd("serve --bg --accept-app-caps=example.com/cap/fine,NOTFINE 3000"), // should be {domain.tld}/{name}
wantErr: func(err error) (badErrMsg string) {
if err == nil || !strings.Contains(err.Error(), fmt.Sprintf("%q does not match", "NOTFINE")) {
return fmt.Sprintf("wanted validation error that quotes the non-matching capability (and nothing more) but got %q", err.Error())
}
return ""
},
},
},
},
{
name: "tcp_with_proxy_protocol_v1",
steps: []step{{
command: cmd("serve --tcp=8000 --proxy-protocol=1 --bg tcp://localhost:5432"),
want: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
8000: {
TCPForward: "localhost:5432",
ProxyProtocol: 1,
},
},
},
}},
},
{
name: "tls_terminated_tcp_with_proxy_protocol_v2",
steps: []step{{
command: cmd("serve --tls-terminated-tcp=443 --proxy-protocol=2 --bg tcp://localhost:5432"),
want: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
443: {
TCPForward: "localhost:5432",
TerminateTLS: "foo.test.ts.net",
ProxyProtocol: 2,
},
},
},
}}, }},
}, },
{
name: "tcp_update_to_add_proxy_protocol",
steps: []step{
{
command: cmd("serve --tcp=8000 --bg tcp://localhost:5432"),
want: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
8000: {TCPForward: "localhost:5432"},
},
}, },
}, },
{ {
command: cmd("serve --http=3000 localhost:3000"), command: cmd("serve --tcp=8000 --proxy-protocol=1 --bg tcp://localhost:5432"),
wantErr: exactErrMsg(fmt.Errorf(backgroundExistsMsg, "serve", "http", 3000)), want: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
8000: {
TCPForward: "localhost:5432",
ProxyProtocol: 1,
},
}, },
}, },
}, },
},
},
{
name: "tcp_proxy_protocol_invalid_version",
steps: []step{{
command: cmd("serve --tcp=8000 --proxy-protocol=3 --bg tcp://localhost:5432"),
wantErr: anyErr(),
}},
},
{
name: "proxy_protocol_without_tcp",
steps: []step{{
command: cmd("serve --https=443 --proxy-protocol=1 --bg http://localhost:3000"),
wantErr: anyErr(),
}},
},
} }
for _, group := range groups { for _, group := range groups {
t.Run(group.name, func(t *testing.T) { t.Run(group.name, func(t *testing.T) {
lc := &fakeLocalServeClient{} lc := group.initialState
for i, st := range group.steps { for i, st := range group.steps {
var stderr bytes.Buffer var stderr bytes.Buffer
var stdout bytes.Buffer var stdout bytes.Buffer
var flagOut bytes.Buffer var flagOut bytes.Buffer
e := &serveEnv{ e := &serveEnv{
lc: lc, lc: &lc,
testFlagOut: &flagOut, testFlagOut: &flagOut,
testStdout: &stdout, testStdout: &stdout,
testStderr: &stderr, testStderr: &stderr,
@ -872,190 +1047,6 @@ func TestServeDevConfigMutations(t *testing.T) {
} }
} }
func TestValidateConfig(t *testing.T) {
tests := [...]struct {
name string
desc string
cfg *ipn.ServeConfig
svc tailcfg.ServiceName
servePort uint16
serveType serveType
bg bgBoolFlag
wantErr bool
}{
{
name: "nil_config",
desc: "when config is nil, all requests valid",
cfg: nil,
servePort: 3000,
serveType: serveTypeHTTPS,
},
{
name: "new_bg_tcp",
desc: "no error when config exists but we're adding a new bg tcp port",
cfg: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
443: {HTTPS: true},
},
},
bg: bgBoolFlag{true, false},
servePort: 10000,
serveType: serveTypeHTTPS,
},
{
name: "override_bg_tcp",
desc: "no error when overwriting previous port under the same serve type",
cfg: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
443: {TCPForward: "http://localhost:4545"},
},
},
bg: bgBoolFlag{true, false},
servePort: 443,
serveType: serveTypeTCP,
},
{
name: "override_bg_tcp",
desc: "error when overwriting previous port under a different serve type",
cfg: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
443: {HTTPS: true},
},
},
bg: bgBoolFlag{true, false},
servePort: 443,
serveType: serveTypeHTTP,
wantErr: true,
},
{
name: "new_fg_port",
desc: "no error when serving a new foreground port",
cfg: &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{
443: {HTTPS: true},
},
Foreground: map[string]*ipn.ServeConfig{
"abc123": {
TCP: map[uint16]*ipn.TCPPortHandler{
3000: {HTTPS: true},
},
},
},
},
servePort: 4040,
serveType: serveTypeTCP,
},
{
name: "same_fg_port",
desc: "error when overwriting a previous fg port",
cfg: &ipn.ServeConfig{
Foreground: map[string]*ipn.ServeConfig{
"abc123": {
TCP: map[uint16]*ipn.TCPPortHandler{
3000: {HTTPS: true},
},
},
},
},
servePort: 3000,
serveType: serveTypeTCP,
wantErr: true,
},
{
name: "new_service_tcp",
desc: "no error when adding a new service port",
cfg: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:foo": {
TCP: map[uint16]*ipn.TCPPortHandler{80: {HTTP: true}},
},
},
},
svc: "svc:foo",
servePort: 8080,
serveType: serveTypeTCP,
},
{
name: "override_service_tcp",
desc: "no error when overwriting a previous service port",
cfg: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:foo": {
TCP: map[uint16]*ipn.TCPPortHandler{
443: {TCPForward: "http://localhost:4545"},
},
},
},
},
svc: "svc:foo",
servePort: 443,
serveType: serveTypeTCP,
},
{
name: "override_service_tcp",
desc: "error when overwriting a previous service port with a different serve type",
cfg: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:foo": {
TCP: map[uint16]*ipn.TCPPortHandler{
443: {HTTPS: true},
},
},
},
},
svc: "svc:foo",
servePort: 443,
serveType: serveTypeHTTP,
wantErr: true,
},
{
name: "override_service_tcp",
desc: "error when setting previous tcp service to tun mode",
cfg: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:foo": {
TCP: map[uint16]*ipn.TCPPortHandler{
443: {TCPForward: "http://localhost:4545"},
},
},
},
},
svc: "svc:foo",
serveType: serveTypeTUN,
wantErr: true,
},
{
name: "override_service_tun",
desc: "error when setting previous tun service to tcp forwarder",
cfg: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:foo": {
Tun: true,
},
},
},
svc: "svc:foo",
serveType: serveTypeTCP,
servePort: 443,
wantErr: true,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
se := serveEnv{bg: tc.bg}
err := se.validateConfig(tc.cfg, tc.servePort, tc.serveType, tc.svc)
if err == nil && tc.wantErr {
t.Fatal("expected an error but got nil")
}
if err != nil && !tc.wantErr {
t.Fatalf("expected no error but got: %v", err)
}
})
}
}
func TestSrcTypeFromFlags(t *testing.T) { func TestSrcTypeFromFlags(t *testing.T) {
tests := []struct { tests := []struct {
name string name string
@ -1130,6 +1121,118 @@ func TestSrcTypeFromFlags(t *testing.T) {
} }
} }
func TestAcceptSetAppCapsFlag(t *testing.T) {
testCases := []struct {
name string
inputs []string
expectErr bool
expectErrToMatch *regexp.Regexp
expectedValue []tailcfg.PeerCapability
}{
{
name: "valid_simple",
inputs: []string{"example.com/name"},
expectErr: false,
expectedValue: []tailcfg.PeerCapability{"example.com/name"},
},
{
name: "valid_unicode",
inputs: []string{"bücher.de/something"},
expectErr: false,
expectedValue: []tailcfg.PeerCapability{"bücher.de/something"},
},
{
name: "more_valid_unicode",
inputs: []string{"example.tw/某某某"},
expectErr: false,
expectedValue: []tailcfg.PeerCapability{"example.tw/某某某"},
},
{
name: "valid_path_slashes",
inputs: []string{"domain.com/path/to/name"},
expectErr: false,
expectedValue: []tailcfg.PeerCapability{"domain.com/path/to/name"},
},
{
name: "valid_multiple_sets",
inputs: []string{"one.com/foo,two.com/bar"},
expectErr: false,
expectedValue: []tailcfg.PeerCapability{"one.com/foo", "two.com/bar"},
},
{
name: "valid_empty_string",
inputs: []string{""},
expectErr: false,
expectedValue: nil, // Empty string should be a no-op and not append anything.
},
{
name: "invalid_path_chars",
inputs: []string{"domain.com/path_with_underscore"},
expectErr: true,
expectErrToMatch: regexp.MustCompile(`"domain.com/path_with_underscore"`),
expectedValue: nil, // Slice should remain empty.
},
{
name: "valid_subdomain",
inputs: []string{"sub.domain.com/name"},
expectErr: false,
expectedValue: []tailcfg.PeerCapability{"sub.domain.com/name"},
},
{
name: "invalid_no_path",
inputs: []string{"domain.com/"},
expectErr: true,
expectErrToMatch: regexp.MustCompile(`"domain.com/"`),
expectedValue: nil,
},
{
name: "invalid_no_domain",
inputs: []string{"/path/only"},
expectErr: true,
expectErrToMatch: regexp.MustCompile(`"/path/only"`),
expectedValue: nil,
},
{
name: "some_invalid_some_valid",
inputs: []string{"one.com/foo,bad/bar,two.com/baz"},
expectErr: true,
expectErrToMatch: regexp.MustCompile(`"bad/bar"`),
expectedValue: []tailcfg.PeerCapability{"one.com/foo"}, // Parsing will stop after first error
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
var v []tailcfg.PeerCapability
flag := &acceptAppCapsFlag{Value: &v}
var err error
for _, s := range tc.inputs {
err = flag.Set(s)
if err != nil {
break
}
}
if tc.expectErr && err == nil {
t.Errorf("expected an error, but got none")
}
			// Guard on err != nil: if the expected error never occurred, the
			// expectErr check above has already reported it, and calling
			// err.Error() here would panic on the nil error.
			if tc.expectErrToMatch != nil && err != nil {
				if !tc.expectErrToMatch.MatchString(err.Error()) {
					t.Errorf("expected error to match %q, but was %q", tc.expectErrToMatch, err)
				}
			}
if !tc.expectErr && err != nil {
t.Errorf("did not expect an error, but got: %v", err)
}
if !reflect.DeepEqual(tc.expectedValue, v) {
t.Errorf("unexpected value, got: %q, want: %q", v, tc.expectedValue)
}
})
}
}
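The flag under test plugs into the standard flag package as a flag.Value. A minimal, self-contained sketch of an accumulating value like it — capsFlag and its simplified validation (require a non-empty domain and path around the first "/") are illustrative stand-ins, not the real acceptAppCapsFlag rules, which are stricter (e.g. rejecting underscores in paths):

package main

import (
	"flag"
	"fmt"
	"strings"
)

// capsFlag is a hypothetical stand-in for the flag under test: Set may be
// called repeatedly, accepts comma-separated values, appends each valid
// entry, and stops at the first invalid one — so earlier valid entries are
// already recorded, as the "some_invalid_some_valid" case expects.
type capsFlag struct{ Value *[]string }

var _ flag.Value = (*capsFlag)(nil)

func (f *capsFlag) String() string { return strings.Join(*f.Value, ",") }

func (f *capsFlag) Set(s string) error {
	if s == "" {
		return nil // empty string is a no-op, as in the test above
	}
	for _, part := range strings.Split(s, ",") {
		domain, path, ok := strings.Cut(part, "/")
		if !ok || domain == "" || path == "" {
			return fmt.Errorf("invalid capability %q", part)
		}
		*f.Value = append(*f.Value, part)
	}
	return nil
}

func main() {
	var caps []string
	flag.Var(&capsFlag{Value: &caps}, "accept-app-caps", "capability names to accept")
	flag.Parse()
	fmt.Println(caps)
}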
func TestCleanURLPath(t *testing.T) {
	tests := []struct {
		input string
@@ -1662,7 +1765,7 @@ func TestIsLegacyInvocation(t *testing.T) {
		}
		if gotTranslation != tt.translation {
-			t.Fatalf("expected translaction to be %q but got %q", tt.translation, gotTranslation)
+			t.Fatalf("expected translation to be %q but got %q", tt.translation, gotTranslation)
		}
	})
}
@@ -1682,6 +1785,7 @@ func TestSetServe(t *testing.T) {
		mountPath     string
		target        string
		allowFunnel   bool
+		proxyProtocol int
		expected      *ipn.ServeConfig
		expectErr     bool
	}{
@@ -1966,7 +2070,7 @@ func TestSetServe(t *testing.T) {
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
-			err := e.setServe(tt.cfg, tt.dnsName, tt.srvType, tt.srvPort, tt.mountPath, tt.target, tt.allowFunnel, magicDNSSuffix)
+			err := e.setServe(tt.cfg, tt.dnsName, tt.srvType, tt.srvPort, tt.mountPath, tt.target, tt.allowFunnel, magicDNSSuffix, nil, tt.proxyProtocol)
			if err != nil && !tt.expectErr {
				t.Fatalf("got error: %v; did not expect error.", err)
			}
@@ -2249,3 +2353,8 @@ func exactErrMsg(want error) func(error) string {
		return fmt.Sprintf("\ngot: %v\nwant: %v\n", got, want)
	}
}
+
+func ptrToReadOnlySlice[T any](s []T) *views.Slice[T] {
+	vs := views.SliceOf(s)
+	return &vs
+}

@@ -11,6 +11,7 @@ import (
	"net/netip"
	"os/exec"
	"runtime"
+	"slices"
	"strconv"
	"strings"
@@ -25,6 +26,7 @@ import (
	"tailscale.com/types/opt"
	"tailscale.com/types/ptr"
	"tailscale.com/types/views"
+	"tailscale.com/util/set"
	"tailscale.com/version"
)
@@ -63,8 +65,10 @@ type setArgsT struct {
	reportPosture     bool
	snat              bool
	statefulFiltering bool
+	sync              bool
	netfilterMode     string
	relayServerPort   string
+	relayServerStaticEndpoints string
}
func newSetFlagSet(goos string, setArgs *setArgsT) *flag.FlagSet {
@@ -85,7 +89,9 @@ func newSetFlagSet(goos string, setArgs *setArgsT) *flag.FlagSet {
	setf.BoolVar(&setArgs.updateApply, "auto-update", false, "automatically update to the latest available version")
	setf.BoolVar(&setArgs.reportPosture, "report-posture", false, "allow management plane to gather device posture information")
	setf.BoolVar(&setArgs.runWebClient, "webclient", false, "expose the web interface for managing this node over Tailscale at port 5252")
+	setf.BoolVar(&setArgs.sync, "sync", false, hidden+"actively sync configuration from the control plane (set to false only for network failure testing)")
	setf.StringVar(&setArgs.relayServerPort, "relay-server-port", "", "UDP port number (0 will pick a random unused port) for the relay server to bind to, on all interfaces, or empty string to disable relay server functionality")
+	setf.StringVar(&setArgs.relayServerStaticEndpoints, "relay-server-static-endpoints", "", "static IP:port endpoints to advertise as candidates for relay connections (comma-separated, e.g. \"[2001:db8::1]:40000,192.0.2.1:40000\") or empty string to not advertise any static endpoints")
	ffcomplete.Flag(setf, "exit-node", func(args []string) ([]string, ffcomplete.ShellCompDirective, error) {
		st, err := localClient.Status(context.Background())
@@ -108,7 +114,7 @@ func newSetFlagSet(goos string, setArgs *setArgsT) *flag.FlagSet {
	switch goos {
	case "linux":
		setf.BoolVar(&setArgs.snat, "snat-subnet-routes", true, "source NAT traffic to local routes advertised with --advertise-routes")
-		setf.BoolVar(&setArgs.statefulFiltering, "stateful-filtering", false, "apply stateful filtering to forwarded packets (subnet routers, exit nodes, etc.)")
+		setf.BoolVar(&setArgs.statefulFiltering, "stateful-filtering", false, "apply stateful filtering to forwarded packets (subnet routers, exit nodes, and so on)")
		setf.StringVar(&setArgs.netfilterMode, "netfilter-mode", defaultNetfilterMode(), "netfilter mode (one of on, nodivert, off)")
	case "windows":
		setf.BoolVar(&setArgs.forceDaemon, "unattended", false, "run in \"Unattended Mode\" where Tailscale keeps running even after the current GUI user logs out (Windows-only)")
@@ -149,6 +155,7 @@ func runSet(ctx context.Context, args []string) (retErr error) {
		OperatorUser: setArgs.opUser,
		NoSNAT:       !setArgs.snat,
		ForceDaemon:  setArgs.forceDaemon,
+		Sync:         opt.NewBool(setArgs.sync),
		AutoUpdate: ipn.AutoUpdatePrefs{
			Check: setArgs.updateCheck,
			Apply: opt.NewBool(setArgs.updateApply),
@@ -242,7 +249,22 @@ func runSet(ctx context.Context, args []string) (retErr error) {
		if err != nil {
			return fmt.Errorf("failed to set relay server port: %v", err)
		}
-		maskedPrefs.Prefs.RelayServerPort = ptr.To(int(uport))
+		maskedPrefs.Prefs.RelayServerPort = ptr.To(uint16(uport))
+	}
+	if setArgs.relayServerStaticEndpoints != "" {
+		endpointsSet := make(set.Set[netip.AddrPort])
+		endpointsSplit := strings.Split(setArgs.relayServerStaticEndpoints, ",")
+		for _, s := range endpointsSplit {
+			ap, err := netip.ParseAddrPort(s)
+			if err != nil {
+				return fmt.Errorf("failed to set relay server static endpoints: %q is not a valid IP:port", s)
+			}
+			endpointsSet.Add(ap)
+		}
+		endpoints := endpointsSet.Slice()
+		slices.SortFunc(endpoints, netip.AddrPort.Compare)
+		maskedPrefs.Prefs.RelayServerStaticEndpoints = endpoints
	}
	checkPrefs := curPrefs.Clone()
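The new --relay-server-static-endpoints handling above is a parse / dedupe / sort pipeline. A self-contained sketch of the same pattern using only the standard library (the real code deduplicates with tailscale.com/util/set; parseEndpoints here is an illustrative name, and a map stands in for the set type):

package main

import (
	"fmt"
	"net/netip"
	"slices"
	"strings"
)

func parseEndpoints(csv string) ([]netip.AddrPort, error) {
	seen := make(map[netip.AddrPort]bool)
	var eps []netip.AddrPort
	for _, s := range strings.Split(csv, ",") {
		ap, err := netip.ParseAddrPort(s)
		if err != nil {
			return nil, fmt.Errorf("%q is not a valid IP:port", s)
		}
		if !seen[ap] { // drop duplicates while preserving valid entries
			seen[ap] = true
			eps = append(eps, ap)
		}
	}
	// Deterministic order, matching the slices.SortFunc call above.
	slices.SortFunc(eps, netip.AddrPort.Compare)
	return eps, nil
}

func main() {
	eps, err := parseEndpoints("[2001:db8::1]:40000,192.0.2.1:40000")
	fmt.Println(eps, err)
}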

@@ -122,7 +122,7 @@ func newUpFlagSet(goos string, upArgs *upArgsT, cmd string) *flag.FlagSet {
	switch goos {
	case "linux":
		upf.BoolVar(&upArgs.snat, "snat-subnet-routes", true, "source NAT traffic to local routes advertised with --advertise-routes")
-		upf.BoolVar(&upArgs.statefulFiltering, "stateful-filtering", false, "apply stateful filtering to forwarded packets (subnet routers, exit nodes, etc.)")
+		upf.BoolVar(&upArgs.statefulFiltering, "stateful-filtering", false, "apply stateful filtering to forwarded packets (subnet routers, exit nodes, and so on)")
		upf.StringVar(&upArgs.netfilterMode, "netfilter-mode", defaultNetfilterMode(), "netfilter mode (one of on, nodivert, off)")
	case "windows":
		upf.BoolVar(&upArgs.forceDaemon, "unattended", false, "run in \"Unattended Mode\" where Tailscale keeps running even after the current GUI user logs out (Windows-only)")
@@ -137,7 +137,7 @@ func newUpFlagSet(goos string, upArgs *upArgsT, cmd string) *flag.FlagSet {
	// Some flags are only for "up", not "login".
	upf.BoolVar(&upArgs.json, "json", false, "output in JSON format (WARNING: format subject to change)")
	upf.BoolVar(&upArgs.reset, "reset", false, "reset unspecified settings to their default values")
-	upf.BoolVar(&upArgs.forceReauth, "force-reauth", false, "force reauthentication (WARNING: this will bring down the Tailscale connection and thus should not be done remotely over SSH or RDP)")
+	upf.BoolVar(&upArgs.forceReauth, "force-reauth", false, "force reauthentication (WARNING: this may bring down the Tailscale connection and thus should not be done remotely over SSH or RDP)")
	registerAcceptRiskFlag(upf, &upArgs.acceptedRisks)
}
@@ -388,7 +388,8 @@ func updatePrefs(prefs, curPrefs *ipn.Prefs, env upCheckEnv) (simpleUp bool, jus
	if !env.upArgs.reset {
		applyImplicitPrefs(prefs, curPrefs, env)
-		if err := checkForAccidentalSettingReverts(prefs, curPrefs, env); err != nil {
+		simpleUp, err = checkForAccidentalSettingReverts(prefs, curPrefs, env)
+		if err != nil {
			return false, nil, err
		}
	}
@@ -420,11 +421,6 @@ func updatePrefs(prefs, curPrefs *ipn.Prefs, env upCheckEnv) (simpleUp bool, jus
	tagsChanged := !reflect.DeepEqual(curPrefs.AdvertiseTags, prefs.AdvertiseTags)
-	simpleUp = env.flagSet.NFlag() == 0 &&
-		curPrefs.Persist != nil &&
-		curPrefs.Persist.UserProfile.LoginName != "" &&
-		env.backendState != ipn.NeedsLogin.String()
	justEdit := env.backendState == ipn.Running.String() &&
		!env.upArgs.forceReauth &&
		env.upArgs.authKeyOrFile == "" &&
@@ -818,6 +814,7 @@ func upWorthyWarning(s string) bool {
		strings.Contains(s, healthmsg.WarnAcceptRoutesOff) ||
		strings.Contains(s, healthmsg.LockedOut) ||
		strings.Contains(s, healthmsg.WarnExitNodeUsage) ||
+		strings.Contains(s, healthmsg.InMemoryTailnetLockState) ||
		strings.Contains(strings.ToLower(s), "update available: ")
}
@@ -889,6 +886,8 @@ func init() {
	addPrefFlagMapping("advertise-connector", "AppConnector")
	addPrefFlagMapping("report-posture", "PostureChecking")
	addPrefFlagMapping("relay-server-port", "RelayServerPort")
+	addPrefFlagMapping("sync", "Sync")
+	addPrefFlagMapping("relay-server-static-endpoints", "RelayServerStaticEndpoints")
}

func addPrefFlagMapping(flagName string, prefNames ...string) {
@@ -924,7 +923,7 @@ func updateMaskedPrefsFromUpOrSetFlag(mp *ipn.MaskedPrefs, flagName string) {
	if prefs, ok := prefsOfFlag[flagName]; ok {
		for _, pref := range prefs {
			f := reflect.ValueOf(mp).Elem()
-			for _, name := range strings.Split(pref, ".") {
+			for name := range strings.SplitSeq(pref, ".") {
				f = f.FieldByName(name + "Set")
			}
			f.SetBool(true)
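strings.SplitSeq (Go 1.24) returns an iterator rather than allocating a slice, which is why the loop above can drop the index variable. A minimal before/after sketch; "AutoUpdate.Apply" is an assumed example of the dotted pref paths this code walks:

// Before: allocates a []string just to iterate over it.
for _, name := range strings.Split("AutoUpdate.Apply", ".") {
	_ = name
}
// After (Go 1.24+): iterates directly, no intermediate slice.
for name := range strings.SplitSeq("AutoUpdate.Apply", ".") {
	_ = name
}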
@@ -966,10 +965,10 @@ type upCheckEnv struct {
//
// mp is the mask of settings actually set, where mp.Prefs is the new
// preferences to set, including any values set from implicit flags.
-func checkForAccidentalSettingReverts(newPrefs, curPrefs *ipn.Prefs, env upCheckEnv) error {
+func checkForAccidentalSettingReverts(newPrefs, curPrefs *ipn.Prefs, env upCheckEnv) (simpleUp bool, err error) {
	if curPrefs.ControlURL == "" {
		// Don't validate things on initial "up" before a control URL has been set.
-		return nil
+		return false, nil
	}
	flagIsSet := map[string]bool{}
@@ -977,10 +976,13 @@ func checkForAccidentalSettingReverts(newPrefs, curPrefs *ipn.Prefs, env upCheck
		flagIsSet[f.Name] = true
	})
-	if len(flagIsSet) == 0 {
+	if len(flagIsSet) == 0 &&
+		curPrefs.Persist != nil &&
+		curPrefs.Persist.UserProfile.LoginName != "" &&
+		env.backendState != ipn.NeedsLogin.String() {
		// A bare "tailscale up" is a special case to just
		// mean bringing the network up without any changes.
-		return nil
+		return true, nil
	}
	// flagsCur is what flags we'd need to use to keep the exact
@@ -1022,7 +1024,7 @@ func checkForAccidentalSettingReverts(newPrefs, curPrefs *ipn.Prefs, env upCheck
		missing = append(missing, fmtFlagValueArg(flagName, valCur))
	}
	if len(missing) == 0 {
-		return nil
+		return false, nil
	}
	// Some previously provided flags are missing. This run of 'tailscale
@@ -1055,7 +1057,7 @@ func checkForAccidentalSettingReverts(newPrefs, curPrefs *ipn.Prefs, env upCheck
		fmt.Fprintf(&sb, " %s", a)
	}
	sb.WriteString("\n\n")
-	return errors.New(sb.String())
+	return false, errors.New(sb.String())
}
// applyImplicitPrefs mutates prefs to add implicit preferences for the user operator.

@@ -85,6 +85,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
  tailscale.com/cmd/tailscale/cli from tailscale.com/cmd/tailscale
  tailscale.com/cmd/tailscale/cli/ffcomplete from tailscale.com/cmd/tailscale/cli
  tailscale.com/cmd/tailscale/cli/ffcomplete/internal from tailscale.com/cmd/tailscale/cli/ffcomplete
+ tailscale.com/cmd/tailscale/cli/jsonoutput from tailscale.com/cmd/tailscale/cli
  tailscale.com/control/controlbase from tailscale.com/control/controlhttp+
  tailscale.com/control/controlhttp from tailscale.com/control/ts2021
  tailscale.com/control/controlhttp/controlhttpcommon from tailscale.com/control/controlhttp
@@ -171,7 +172,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
  tailscale.com/types/structs from tailscale.com/ipn+
  tailscale.com/types/tkatype from tailscale.com/types/key+
  tailscale.com/types/views from tailscale.com/tailcfg+
- tailscale.com/util/cibuild from tailscale.com/health
+ tailscale.com/util/cibuild from tailscale.com/health+
  tailscale.com/util/clientmetric from tailscale.com/net/netcheck+
  tailscale.com/util/cloudenv from tailscale.com/net/dnscache+
  tailscale.com/util/cmpver from tailscale.com/net/tshttpproxy+

Some files were not shown because too many files have changed in this diff.