Compare commits


146 Commits

Author SHA1 Message Date
Brad Fitzpatrick 72f736134d cmd/testwrapper/flakytest: skip flaky tests if TS_SKIP_FLAKY_TESTS set
This is for a future test scheduler, so it can run potentially flaky
tests separately, doing all the non-flaky ones together in one batch.

Updates tailscale/corp#28679

Change-Id: Ic4a11f9bf394528ef75792fd622f17bc01a4ec8a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
3 hours ago
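For context, a minimal sketch of what an environment-variable gate like this can look like in a test helper (illustrative only, not the actual flakytest code):

```go
package flakytestsketch

import (
	"os"
	"testing"
)

// Mark marks t as flaky and skips it when TS_SKIP_FLAKY_TESTS is set,
// so a scheduler can run flaky tests in a separate pass.
func Mark(t *testing.T, issueURL string) {
	t.Helper()
	if os.Getenv("TS_SKIP_FLAKY_TESTS") != "" {
		t.Skipf("flaky test (tracked at %s) skipped: TS_SKIP_FLAKY_TESTS is set", issueURL)
	}
}
```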
Andrew Lytvynov d7d12761ba
Add .stignore for syncthing (#18540)
This symlink tells syncthing to ignore stuff that's in .gitignore.

Updates https://github.com/tailscale/corp/issues/36250

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
20 hours ago
Brad Fitzpatrick 8f8236feb3 cmd/printdep: add --next flag to use rc Go build hash instead
Updates tailscale/corp#36382

Change-Id: Ib7474b0aab901e98f0fe22761e26fd181650743c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
21 hours ago
Brad Fitzpatrick a374cc344e tool/gocross, pull-toolchain.sh: support a "next" Go toolchain
When TS_GO_NEXT=1 is set, update/use the
go.toolchain.next.{branch,rev} files instead.

This lets us do test deploys of Go release candidates on some
backends, without affecting all backends.

Updates tailscale/corp#36382

Change-Id: I00dbde87b219b720be5ea142325c4711f101a364
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
22 hours ago
Cameron Stokes aac12ba799
cmd/tailscale/cli: add json output option to `switch --list` (#18501)
* cmd/tailscale/cli: add json output option to `switch --list`

Closes #14783

Signed-off-by: Cameron Stokes <cameron@tailscale.com>
22 hours ago
Alex Chan ae62569159 hostinfo: retrieve OS version for Macs running the OSS client
Updates #18520

Change-Id: If86a1f702c704b003002aa7e2f5a6b1418b469cc
Signed-off-by: Alex Chan <alexc@tailscale.com>
1 day ago
Amal Bansode 6de5b01e04
ipn/localapi: stop logging "broken pipe" errors (#18487)
The Tailscale CLI has some methods to watch the IPN bus for
messages, say, the current netmap (`tailscale debug netmap`).
The Tailscale daemon supports this using a streaming HTTP
response. Sometimes, the client can close its connection
abruptly -- due to an interruption, or in the case of `debug netmap`,
intentionally after consuming one message.

If the server daemon is writing a response as the client closes
its end of the socket, the daemon typically encounters a "broken pipe"
error. The "Watch IPN Bus" handler currently logs such errors after
they're propagated by a JSON encoding/writer helper.

Since the Tailscale CLI nominally closes its socket with the daemon
in this slightly ungraceful way (viz. `debug netmap`), stop logging
these broken pipe errors as far as possible. This will help avoid
confounding users when they scan backend logs.

Updates #18477

Signed-off-by: Amal Bansode <amal@tailscale.com>
2 days ago
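A hedged sketch of the usual way to suppress such messages, recognizing client-disconnect errors before logging (not the actual handler code; the exact error set is an assumption):

```go
package ipnsketch

import (
	"errors"
	"io"
	"log"
	"syscall"
)

// logIfInteresting drops client-disconnect errors (broken pipe,
// connection reset, plain EOF) and logs everything else.
func logIfInteresting(err error) {
	if err == nil {
		return
	}
	if errors.Is(err, syscall.EPIPE) ||
		errors.Is(err, syscall.ECONNRESET) ||
		errors.Is(err, io.EOF) {
		return // the client went away; nothing actionable to report
	}
	log.Printf("watch IPN bus: %v", err)
}
```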
M. J. Fromberger 9385dfe7f6
ipn/ipnlocal/netmapcache: add a package to split and cache network maps (#18497)
This commit is based on part of #17925, reworked as a separate package.

Add a package that can store and load netmap.NetworkMap values in persistent
storage, using a basic columnar representation. This commit includes a default
storage interface based on plain files, but the interface can be implemented
with more structured storage if we want to later.

The tests are set up to require that all the fields of the NetworkMap are
handled, except those explicitly designated as not-cached, and check that a
fully-populated value can round-trip correctly through the cache.  Adding or
removing fields, either in the NetworkMap or in the cached representation, will
trigger either build failures (e.g., for type mismatch) or test failures (e.g.,
for representation changes or missing fields). This isn't quite as nice as
automatically updating the representation, which I also prototyped, but is much
simpler to maintain and less code.

This commit does not yet hook up the cache to the backend, that will be a
subsequent change.

Updates #12639

Change-Id: Icb48639e1d61f2aec59904ecd172c73e05ba7bf9
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2 days ago
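As a rough illustration of a plain-file storage interface of this shape (names and types here are hypothetical, not the netmapcache API):

```go
package cachesketch

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// Store is a minimal persistent-storage interface backed by plain files.
type Store interface {
	Write(name string, data []byte) error
	Read(name string) ([]byte, error)
}

// fileStore keeps each named blob as a file under dir.
type fileStore struct{ dir string }

func NewFileStore(dir string) (Store, error) {
	if err := os.MkdirAll(dir, 0o700); err != nil {
		return nil, err
	}
	return &fileStore{dir: dir}, nil
}

func (s *fileStore) Write(name string, data []byte) error {
	return os.WriteFile(filepath.Join(s.dir, name), data, 0o600)
}

func (s *fileStore) Read(name string) ([]byte, error) {
	return os.ReadFile(filepath.Join(s.dir, name))
}

// Save marshals one cached column (e.g. peers, DNS config) as JSON.
func Save(s Store, column string, v any) error {
	data, err := json.Marshal(v)
	if err != nil {
		return err
	}
	return s.Write(column+".json", data)
}
```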
Harry Harpham 6e44cb6ab3 tsnet: make ListenService examples consistent with other tsnet examples
Fixes tailscale/corp#36365

Signed-off-by: Harry Harpham <harry@tailscale.com>
2 days ago
Andrew Dunham 8d875a301c net/dns: add test for DoH upgrade of system DNS
Someone asked me if we use DNS-over-HTTPS if the system's resolver is an
IP address that supports DoH and there's no global nameserver set (i.e.
no "Override DNS servers" set). I didn't know the answer offhand, and it
took a while for me to figure it out. The answer is yes, in cases where
we take over the system's DNS configuration and read the base config, we
do upgrade any DoH-capable resolver to use DoH. Here's a test that
verifies this behaviour (and hopefully helps as documentation the next
time someone has this question).

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
2 days ago
Andrew Dunham 0e1b2b15f1 net/dns/publicdns: support CIRA Canadian Shield
RELNOTE=Add DNS-over-HTTPS support for CIRA Canadian Shield

Fixes #18524

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
2 days ago
Fran Bull 9d13a6df9c appc,ipn/ipnlocal: Add split DNS entries for conn25 peers
If conn25 config is sent in the netmap: add split DNS entries to use
appropriately tagged peers' PeerAPI to resolve DNS requests for those
domains.

This will enable future work where we use the peers as connectors for
the configured domains.

Updates tailscale/corp#34252

Signed-off-by: Fran Bull <fran@tailscale.com>
2 days ago
James Tucker 1183f7a191 tstest/integration/testcontrol: fix unguarded read of DNS config
Fixes #18498

Signed-off-by: James Tucker <james@tailscale.com>
4 days ago
License Updater 76839587eb licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
5 days ago
Andrew Lytvynov bfa90ea9b3
go.toolchain.rev: update to Go 1.25.6 (#18507)
Updates #18506

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
5 days ago
Nick Khyl 2a69f48541 wf: allow limited broadcast to/from permitted interfaces when using an exit node on Windows
Similarly to allowing link-local multicast in #13661, we should also allow broadcast traffic
on permitted interfaces when the killswitch is enabled due to exit node usage on Windows.
This always includes internal interfaces, such as Hyper-V/WSL2, and also the LAN when
"Allow local network access" is enabled in the client.

Updates #18504

Signed-off-by: Nick Khyl <nickk@tailscale.com>
5 days ago
Will Norris 3ec5be3f51 all: remove AUTHORS file and references to it
This file was never truly necessary and has never actually been used in
the history of Tailscale's open source releases.

A Brief History of AUTHORS files
---

The AUTHORS file was a pattern developed at Google, originally for
Chromium, then adopted by Go and a bunch of other projects. The problem
was that Chromium originally had a copyright line only recognizing
Google as the copyright holder. Because Google (and most open source
projects) do not require copyright assignment for contributions, each
contributor maintains their copyright. Some large corporate contributors
then tried to add their own name to the copyright line in the LICENSE
file or in file headers. This quickly becomes unwieldy, and puts a
tremendous burden on anyone building on top of Chromium, since the
license requires that they keep all copyright lines intact.

The compromise was to create an AUTHORS file that would list all of the
copyright holders. The LICENSE file and source file headers would then
include that list by reference, listing the copyright holder as "The
Chromium Authors".

This also became cumbersome: simply keeping the file up to date with a
high rate of new contributors is a chore. Plus it's not always obvious who the
copyright holder is. Sometimes it is the individual making the
contribution, but many times it may be their employer. There is no way
for the project maintainer to know.

Eventually, Google changed their policy to no longer recommend trying to
keep the AUTHORS file up to date proactively, and instead to only add to
it when requested: https://opensource.google/docs/releasing/authors.
They are also clear that:

> Adding contributors to the AUTHORS file is entirely within the
> project's discretion and has no implications for copyright ownership.

It was primarily added to appease a small number of large contributors
that insisted that they be recognized as copyright holders (which was
entirely their right to do). But it's not truly necessary, and not even
the most accurate way of identifying contributors and/or copyright
holders.

In practice, we've never added anyone to our AUTHORS file. It only lists
Tailscale, so it's not really serving any purpose. It also causes
confusion because Tailscalars put the "Tailscale Inc & AUTHORS" header
in other open source repos which don't actually have an AUTHORS file, so
it's ambiguous what that means.

Instead, we just acknowledge that the contributors to Tailscale (whoever
they are) are copyright holders for their individual contributions. We
also have the benefit of using the DCO (developercertificate.org) which
provides some additional certification of their right to make the
contribution.

The source file changes were purely mechanical with:

    git ls-files | xargs sed -i -e 's/\(Tailscale Inc &\) AUTHORS/\1 contributors/g'

Updates #cleanup

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
5 days ago
M. J. Fromberger ce12863ee5
ipn/ipnlocal: manage per-profile subdirectories in TailscaleVarRoot (#18485)
In order to better manage per-profile data resources on the client, add methods
to the LocalBackend to support creation of per-profile directory structures in
local storage. These methods build on the existing TailscaleVarRoot config, and
have the same limitation (i.e., if no local storage is available, it will
report an error when used).

The immediate motivation is to support netmap caching, but we can also use this
mechanism for other per-profile resources including pending taildrop files and
Tailnet Lock authority caches.

This commit only adds the directory-management plumbing; later commits will
handle migrating taildrop, TKA, etc. to this mechanism, as well as caching
network maps.

Updates #12639

Change-Id: Ia75741955c7bf885e49c1ad99f856f669a754169
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
5 days ago
Francois Marier df54751725
scripts/installer.sh: allow running dnf5 install script twice (#18492)
`dnf config-manager addrepo` will fail if the Tailscale repo is already
installed. Without the --overwrite flag, the installer will error out
instead of succeeding like with dnf3.

Fixes #18491

Signed-off-by: Francois Marier <francois@fmarier.org>
5 days ago
James Tucker 63d563e734 tsnet: add support for a user-supplied tun.Device
tsnet users can now provide a tun.Device, including any custom
implementation that conforms to the interface.

netstack has a new option, CheckLocalTransportEndpoints, which when used
alongside a TUN lets netstack listens and dials correctly capture
traffic associated with those sockets. tsnet with a TUN sets this
option, while all other builds leave this at false to preserve existing
performance.

Updates #18423

Signed-off-by: James Tucker <james@tailscale.com>
6 days ago
Harry Harpham c062230cce tsnet: clarify that ListenService starts the server if necessary
Every other listen method on tsnet.Server makes this clarification, so
should ListenService.

Fixes tailscale/corp#36207
Signed-off-by: Harry Harpham <harry@tailscale.com>
6 days ago
Claus Lensbøl 151644f647
wgengine: send disco key via TSMP on first contact (#18215)
When we have not yet communicated with a peer, send a
TSMPDiscoAdvertisement to let the peer know of our disco key. This is in
most cases redundant, but will allow us to set up direct connections
when the client cannot access control.

Some parts taken from: #18073

Updates #12639

Signed-off-by: Claus Lensbøl <claus@tailscale.com>
6 days ago
Alex Valiushko 4b7585df77
net/udprelay: add tailscaled_peer_relay_endpoints gauge (#18265)
New gauge reflects endpoint state via labels:
- open, when both peers are connected and ready to talk, and
- connecting, when at least one peer hasn't connected yet.

Corresponding client metrics are logged as
- udprelay_endpoints_connecting
- udprelay_endpoints_open

Updates tailscale/corp#30820

Change-Id: Idb1baa90a38c97847e14f9b2390093262ad0ea23

Signed-off-by: Alex Valiushko <alexvaliushko@tailscale.com>
7 days ago
Josh Bleecher Snyder 6dc0bd834c util/limiter: don't panic when dumping a new Limiter
Fixes #18439

Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
7 days ago
David Bond 2cb86cf65e
cmd/k8s-operator,k8s-operator: Allow the use of multiple tailnets (#18344)
This commit contains the implementation of multi-tailnet support within the Kubernetes Operator.

Each of our custom resources now expose the `spec.tailnet` field. This field is a string that must match the name of an existing `Tailnet` resource. A `Tailnet` resource looks like this:

```yaml
apiVersion: tailscale.com/v1alpha1
kind: Tailnet
metadata:
  name: example  # This is the name that must be referenced by other resources
spec:
  credentials:
    secretName: example-oauth
```

Each `Tailnet` references a `Secret` resource that contains a set of oauth credentials. This secret must be created in the same namespace as the operator:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-oauth # This is the name that's referenced by the Tailnet resource.
  namespace: tailscale
stringData:
  client_id: "client-id"
  client_secret: "client-secret"
```

When created, the operator performs a basic check that the oauth client has access to all required scopes. This is done using read actions on devices, keys & services. While this doesn't capture a missing "write" permission, it catches completely missing permissions. Once this check passes, the `Tailnet` moves into a ready state and can be referenced. Attempting to use a `Tailnet` in a non-ready state will stall the deployment of `Connector`s, `ProxyGroup`s and `Recorder`s until the `Tailnet` becomes ready.

The `spec.tailnet` field informs the operator that a `Connector`, `ProxyGroup`, or `Recorder` must be given an auth key generated using the specified oauth client. For backwards compatibility, the set of credentials the operator is configured with are considered the default. That is, where `spec.tailnet` is not set, the resource will be deployed in the same tailnet as the operator. 

Updates https://github.com/tailscale/corp/issues/34561
1 week ago
Jonathan Nobels e30626c480
version: add support for reporting the mac variant from tailscale --version (#18462)
fixes tailscale/corp#27182

tailscale version --json now includes an osVariant field that will report
one of macsys, appstore or darwin. We can extend this to other
platforms where tailscaled can have multiple personalities.

This also adds the concept of a platform-specific callback for querying
an explicit application identifier.  On Apple, we can use
CFBundleGetIdentifier(mainBundle) to get the bundle identifier via cgo.
This removes all the ambiguity and lets us remove other less direct
methods (like env vars, locations, etc).

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
1 week ago
Alex Valiushko 0a5639dcc0
net/udprelay: advertise addresses from cloud metadata service (#18368)
Polls IMDS (currently only AWS) for extra IPs to advertise as udprelay.

Updates #17796

Change-Id: Iaaa899ef4575dc23b09a5b713ce6693f6a6a6964

Signed-off-by: Alex Valiushko <alexvaliushko@tailscale.com>
1 week ago
Tom Meadows 7213b35d85
k8s-operator,kube: remove enableSessionRecording from Kubernetes Cap Map (#18452)
* k8s-operator,kube: removing enableSessionRecordings option. It seems
like it is going to create a confusing user experience and it's going to
be a very niche use case, so we have decided to defer this for now.

Updates tailscale/corp#35796

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>

* k8s-operator: adding metric for env var deprecation

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>

---------

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
1 week ago
Eduardo Sorribas 7676030355
net/portmapper: Stop replacing the internal port with the upnp external port (#18349)
net/portmapper: Stop replacing the internal port with the upnp external port

This causes the UPnP mapping to break in the next recreation of the
mapping.

Fixes #18348

Signed-off-by: Eduardo Sorribas <eduardo@sorribas.org>
1 week ago
Harry Harpham 3840183be9 tsnet: add support for Services
This change allows tsnet nodes to act as Service hosts by adding a new
function, tsnet.Server.ListenService. Invoking this function will
advertise the node as a host for the Service and create a listener to
receive traffic for the Service.

Fixes #17697
Fixes tailscale/corp#27200
Signed-off-by: Harry Harpham <harry@tailscale.com>
2 weeks ago
Harry Harpham 1b88e93ff5 ipn/ipnlocal: allow retrieval of serve config ETags from local API
This change adds API to ipn.LocalBackend to retrieve the ETag when
querying for the current serve config. This allows consumers of
ipn.LocalBackend.SetServeConfig to utilize the concurrency control
offered by ETags. Previous to this change, utilizing serve config ETags
required copying the local backend's internal ETag calculation.

The local API server was previously copying the local backend's ETag
calculation as described above. With this change, the local API server
now uses the new ETag retrieval function instead. Serve config ETags are
therefore now opaque to clients, in line with best practices.

Fixes tailscale/corp#35857
Signed-off-by: Harry Harpham <harry@tailscale.com>
2 weeks ago
Jonathan Nobels 643e91f2eb
net/netmon: move TailscaleInterfaceIndex out of netmon.State (#18428)
fixes tailscale/tailscale#18418

Both Serve and PeerAPI broke when we moved the TailscaleInterfaceName
into State, which is updated asynchronously and may not be
available when we configure the listeners.

This extracts the explicit interface name property from netmon.State
and adds as a static struct with getters that have proper error
handling.

The bug is only found in sandboxed Darwin clients, where we
need to know the Tailscale interface details in order to set up the
listeners correctly (they must bind to our interface explicitly to escape
the network sandboxing that is applied by NECP).

Currently only sandboxed macOS and Plan9 set this, but it will
also be useful on Windows to simplify interface filtering in netns.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2 weeks ago
Nick Khyl 1478028591 docs/windows/policy: use a separate value to track the configuration state of EnableDNSRegistration
Policy editors, such as gpedit.msc and gpme.msc, rely on both the presence and the value of the
registry value to determine whether a policy is enabled. Unless an enabledValue is specified
explicitly, it defaults to REG_DWORD 1.

Therefore, we cannot rely on the same registry value to track the policy configuration state when
it is already used by a policy option, such as a dropdown. Otherwise, while the policy setting
will be written and function correctly, it will appear as Not Configured in the policy editor
due to the value mismatch (for example, REG_SZ "always" vs REG_DWORD 1).

In this PR, we update the DNSRegistration policy setting to use the DNSRegistrationConfigured
registry value for tracking. This change has no effect on the client side and exists solely to
satisfy ADMX and policy editor requirements.

Updates #14917

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2 weeks ago
Tom Meadows 1cc6f3282e
k8s-operator,kube: allowing k8s api request events to be enabled via grants (#18393)
Updates #35796

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
2 weeks ago
Aaron Klotz 54d77898da tool/gocross: update gocross-wrapper.ps1 to use absolute path for resolving tar
gocross-wrapper.ps1 is written to use the version of tar that ships with
Windows; we want to avoid conflicts with any other tar on the PATH, such
as ones installed by MSYS and/or Cygwin.

Updates https://github.com/tailscale/corp/issues/29940

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2 weeks ago
Nick O'Neill 1a79abf5fb
VERSION.txt: this is v1.95.0 (#18414)
Signed-off-by: Nick O'Neill <nick@tailscale.com>
2 weeks ago
Simon Law 5aeee1d8a5
.github/workflows: double the timeout for golangci-lint (#18404)
Recently, the golangci-lint workflow has been taking longer and longer
to complete, causing it to timeout after the default of 5 minutes.

    Running error: context loading failed: failed to load packages: failed to load packages: failed to load with go/packages: context deadline exceeded
    Timeout exceeded: try increasing it by passing --timeout option

Although PR #18398 enabled the Go module cache, bootstrapping with a
cold cache still takes too long.

This PR doubles the default 5 minute timeout for golangci-lint to 10
minutes so that golangci-lint can finish downloading all of its
dependencies.

Note that this doesn’t affect the 5 minute timeout configured in
.golangci.yml, since running golangci-lint on your local instance
should still be plenty fast.

Fixes #18366

Signed-off-by: Simon Law <sfllaw@tailscale.com>
2 weeks ago
Tom Meadows c3b7f24051
ipn,ipn/local: always accept routes for Tailscale Services (cgnat range) (#18173)
Updates #18198

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
Co-authored-by: James Tucker <raggi@tailscale.com>
2 weeks ago
Mario Minardi e9d82767e5 cmd/containerboot: allow for automatic ID token generation
Allow for optionally specifying an audience for containerboot. This is
passed to tailscale up to allow for containerboot to use automatic ID
token generation for authentication.

Updates https://github.com/tailscale/corp/issues/34430

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 weeks ago
Mario Minardi 02af7c963c tsnet: allow for automatic ID token generation
Allow for optionally specifying an audience for tsnet. This is passed
to the underlying identity federation logic to allow for tsnet auth to
use automatic ID token generation for authentication.

Updates https://github.com/tailscale/corp/issues/33316

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 weeks ago
Irbe Krumina 28f163542c
.github/actions/go-cache: build cigocacher using remote path, fall back to ./tool/go (#18409)
If local tailscale/tailscale checkout is not available,
pull cigocacher remotely.
Fall back to ./tool/go if no other Go installation
is present.

Updates tailscale/corp#32493

Signed-off-by: Irbe Krumina <irbekrm@gmail.com>
2 weeks ago
Danni Popova 6a6aa805d6
cmd,feature: add identity token auto generation for workload identity (#18373)
Adds the ability to detect which provider the client is running on and tries to fetch the ID token to use with Workload Identity.

Updates https://github.com/tailscale/corp/issues/33316

Signed-off-by: Danni Popova <danni@tailscale.com>
2 weeks ago
Anton Tolchanov 58042e2de3 metrics: add a NewSet and Set.NewLabelMap helpers
Updates tailscale/corp#31174

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2 weeks ago
Anton Tolchanov 17b0c7bfb3 metrics: add a NewLabelMap helper to create and register label maps
Updates tailscale/corp#31174

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2 weeks ago
Simon Law 76fb09c6bd
.github/workflows: fix timeouts by caching packages for golangci-lint (#18398)
Recently, the golangci-lint workflow has been taking longer and longer
to complete, causing it to time out after the default of 5 minutes.

    Running error: context loading failed: failed to load packages: failed to load packages: failed to load with go/packages: context deadline exceeded
    Timeout exceeded: try increasing it by passing --timeout option

This PR upgrades actions/setup-go to version 6, the latest, and
enables caching for Go modules and build outputs. This should speed up
linting because most packages won’t have to be downloaded over and
over again.

Fixes #18366

Signed-off-by: Simon Law <sfllaw@tailscale.com>
2 weeks ago
Irbe Krumina 8c17d871b3
ipn/store/kubestore: don't load write replica certs in memory (#18395)
Fixes a bug where, for kube HA proxies, TLS certs for the replica
responsible for cert issuance were loaded in memory on startup,
although the in-memory store was not updated after renewal (to
avoid failing re-issuance for re-created Ingresses).
Now the 'write' replica always reads certs from the kube Secret.

Updates tailscale/tailscale#18394

Signed-off-by: Irbe Krumina <irbekrm@gmail.com>
2 weeks ago
Harry Harpham 87e108e10c docs: add instructions on referencing pull requests in commit messages
Updates #cleanup
Signed-off-by: Harry Harpham <harry@tailscale.com>
2 weeks ago
Harry Harpham 78c8d14254 tsnet: use errors.Join and idiomatic field order
Updates #18376 (follow up on feedback)
Signed-off-by: Harry Harpham <harry@tailscale.com>
2 weeks ago
Raj Singh aadc4f2ef4
wgengine/magicsock: add home DERP region usermetric (#18062)
Expose the node's home DERP region ID as a Prometheus gauge via the
usermetrics endpoint.

Fixes #18061

Signed-off-by: Raj Singh <raj@tailscale.com>
3 weeks ago
Patrick O'Doherty 5db95ec376
go.mod: bump github.com/containerd/containerd@v1.7.29 (#18374)
Updates #cleanup

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
3 weeks ago
Harry Harpham 3c1be083a4 tsnet: ensure funnel listener cleans up after itself when closed
Previously the funnel listener would leave artifacts in the serve
config. This caused weird out-of-sync effects like the admin panel
showing that funnel was enabled for a node, but the node rejecting
packets because the listener was closed.

This change resolves these synchronization issues by ensuring that
funnel listeners clean up the serve config when closed.

See also:
e109cf9fdd

Updates #cleanup
Signed-off-by: Harry Harpham <harry@tailscale.com>
3 weeks ago
Harry Harpham f9762064cf tsnet: reset serve config only once
Prior to this change, we were resetting tsnet's serve config every
time tsnet.Server.Up was run. This is important to do on startup, to
prevent messy interactions with stale configuration when the code has
changed.

However, Up is frequently run as a just-in-case step (for example, by
Server.ListenTLS/ListenFunnel and possibly by consumers of tsnet). When
the serve config is reset on each of these calls to Up, this creates
situations in which the serve config disappears unexpectedly. The
solution is to reset the serve config only on the first call to Up.

Fixes #8800
Updates tailscale/corp#27200
Signed-off-by: Harry Harpham <harry@tailscale.com>
3 weeks ago
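The "only on the first call" behaviour is the classic sync.Once pattern; an illustrative sketch (field and method names are made up, not tsnet's internals):

```go
package tsnetsketch

import "sync"

type server struct {
	resetOnce   sync.Once
	serveConfig map[string]string // stand-in for the real serve config
}

// up is called on every Listen*-style entry point, but only the first
// call clears stale serve config left over from a previous run.
func (s *server) up() {
	s.resetOnce.Do(func() {
		s.serveConfig = nil // reset exactly once, at startup
	})
	// ... bring the node up ...
}
```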
Jordan Whited 5f34f14e14 net/udprelay: apply netns Control func to server socket(s)
To prevent peer relay servers from sending packets *over* Tailscale.

Updates tailscale/corp#35651

Signed-off-by: Jordan Whited <jordan@tailscale.com>
3 weeks ago
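Socket-level hooks like this are typically applied through net.ListenConfig's Control callback, which runs before bind; a minimal sketch under that assumption (the real netns wiring is more involved):

```go
package relaysketch

import (
	"context"
	"net"
	"syscall"
)

// listenWithControl opens a UDP socket, letting ctrl see the raw socket
// (for example, to mark it or bind it to an interface) before it is used.
func listenWithControl(ctx context.Context, addr string, ctrl func(network, address string, c syscall.RawConn) error) (net.PacketConn, error) {
	lc := net.ListenConfig{Control: ctrl}
	return lc.ListenPacket(ctx, "udp", addr)
}
```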
Mario Minardi 4c37141ab7 cmd,internal,feature: add workload identity support to gitops pusher
Add support for authenticating the gitops-pusher using workload identity
federation.

Updates https://github.com/tailscale/corp/issues/34172

Signed-off-by: Mario Minardi <mario@tailscale.com>
3 weeks ago
Simon Law 3e45e5b420
feature/featuretags: make QR codes modular (#18358)
QR codes are used by `tailscale up --qr` to provide an easy way to
open a web-page without transcribing a difficult URI. However, there’s
no need for this feature if the client will never be called
interactively. So this PR adds the `ts_omit_qrcodes` build tag.

Updates #18182

Signed-off-by: Simon Law <sfllaw@tailscale.com>
3 weeks ago
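Feature-omission tags of this kind usually pair a default file with a counterpart carrying the inverse constraint; a hedged sketch of the shape (package and function names are illustrative):

```go
//go:build !ts_omit_qrcodes

// Package qrfeature sketches how a ts_omit_* tag gates a feature: a
// sibling file with the inverse constraint (//go:build ts_omit_qrcodes)
// provides a no-op implementation, so callers never need to care.
package qrfeature

import "fmt"

// PrintQR stands in for the real QR renderer; it is compiled in only
// when the binary is built without -tags=ts_omit_qrcodes.
func PrintQR(url string) {
	fmt.Println("QR code for:", url)
}
```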
Andrew Dunham 6aac87a84c net/portmapper, go.mod: unfork our goupnp dependency
Updates #7436

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
3 weeks ago
Tom Proctor 5019dc8eb2
go.mod: bump mkctr dep (#18365)
Brings in tailscale/mkctr#29.

Updates tailscale/corp#32085

Change-Id: I90160ed1cdc47118ac8fd0712d63a7b590e739d3

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
3 weeks ago
Tom Proctor 5be02ee6f8 cmd/k8s-operator/e2e,go.mod: remove client v2 dependency
It's not worth adding the v2 client just for these e2e tests. Remove
that dependency for now to keep a clear separation, but we should revive
the v2 client version if we ever decide to take that dependency for the
tailscale/tailscale repo as a whole.

Updates tailscale/corp#32085

Change-Id: Ic51ce233d5f14ce2d25f31a6c4bb9cf545057dd0
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
3 weeks ago
Tom Proctor 73cb3b491e
cmd/k8s-operator/e2e: run self-contained e2e tests with devcontrol (#17415)
* cmd/k8s-operator/e2e: run self-contained e2e tests with devcontrol

Adds orchestration for more of the e2e testing setup requirements to
make it easier to run them in CI, but also run them locally in a way
that's consistent with CI. Requires running devcontrol, but otherwise
supports creating all the scaffolding required to exercise the operator
and proxies.

Updates tailscale/corp#32085

Change-Id: Ia7bff38af3801fd141ad17452aa5a68b7e724ca6
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>

* cmd/k8s-operator/e2e: being more specific on tmp dir cleanup

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>

---------

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
Co-authored-by: chaosinthecrd <tom@tmlabs.co.uk>
3 weeks ago
Simon Law 522a6e385e
cmd/tailscale/cli, util/qrcodes: format QR codes on Linux consoles (#18182)
Raw Linux consoles support UTF-8, but we cannot assume that all UTF-8
characters are available. The default Fixed and Terminus fonts don’t
contain half-block characters (`▀` and `▄`), but do contain the
full-block character (`█`).

Sometimes, Linux doesn’t have a framebuffer, so it falls back to VGA.
When this happens, the full-block character could be anywhere in
extended ASCII block, because we don’t know which code page is active.

This PR introduces `--qr-format=auto` which tries to heuristically
detect when Tailscale is printing to a raw Linux console, whether
UTF-8 is enabled, and which block characters have been mapped in the
console font.

If Unicode characters are unavailable, the new `--qr-format=ascii`
formatter uses `#` characters instead of full-block characters.

Fixes #12935

Signed-off-by: Simon Law <sfllaw@tailscale.com>
3 weeks ago
Raj Singh e66531041b
cmd/containerboot: add OAuth and WIF auth support (#18311)
Fixes tailscale/corp#34430

Signed-off-by: Raj Singh <raj@tailscale.com>
3 weeks ago
Andrew Lytvynov 6c67deff38
cmd/distsign: add CLI for verifying package signatures (#18239)
Updates #35374

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
Naman Sood 480ee9fec0
ipn,cmd/tailscale/cli: set correct SNI name for TLS-terminated TCP Services (#17752)
Fixes #17749.

Signed-off-by: Naman Sood <mail@nsood.in>
3 weeks ago
Alex Valiushko 4c3cf8bb11
wgengine/magicsock: extract IMDS utilities into a standalone package (#18334)
Moves magicsock.cloudInfo into util/cloudinfo with minimal changes.

Updates #17796

Change-Id: I83f32473b9180074d5cdbf00fa31e5b3f579f189

Signed-off-by: Alex Valiushko <alexvaliushko@tailscale.com>
3 weeks ago
Mario Minardi a662c541ab .github/workflows: bump create-pull-request to 8.0.0
Bump peter-evans/create-pull-request to 8.0.0 to ensure compatibility
with actions/checkout 6.x.

Updates #cleanup

Signed-off-by: Mario Minardi <mario@tailscale.com>
3 weeks ago
dependabot[bot] 9a6282b515 .github: Bump actions/checkout from 4.2.2 to 5.0.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.2 to 5.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](11bd71901b...08c6903cd8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
3 weeks ago
Harry Harpham 7de1b0b330
cmd/tailscale/cli: remove Services-specific subcommands from funnel (#18225)
The funnel command is sort of an alias for the serve command. This means
that the subcommands added to serve to support Services appear as
subcommands for funnel as well, despite having no meaning for funnel.
This change removes all such Services-specific subcommands from funnel.

Fixes tailscale/corp#34167

Signed-off-by: Harry Harpham <harry@tailscale.com>
3 weeks ago
Irbe Krumina 8ea90ba80d
cmd/tailscaled,ipn/{ipnlocal,store/kubestore}: don't create attestation keys for stores that are not bound to a node (#18322)
Ensure that hardware attestation keys are not added to tailscaled
state stores that are Kubernetes Secrets or AWS SSM as those Tailscale
devices should be able to be recreated on different nodes, for example,
when moving Pods between nodes.

Updates tailscale/tailscale#18302

Signed-off-by: Irbe Krumina <irbekrm@gmail.com>
3 weeks ago
Andrew Lytvynov 68617bb82e
cmd/tailscaled: disable state encryption / attestation by default (#18336)
TPM-based features have been incredibly painful due to the heterogeneous
devices in the wild, and many situations in which the TPM "changes" (is
reset or replaced). All of this leads to a lot of customer issues.

We hoped to iron out all the kinks and get all users to benefit from
state encryption and hardware attestation without manually opting in,
but the long tail of kinks is just too long.

This change disables TPM-based features on Windows and Linux by default.
Node state should get auto-decrypted on update, and old attestation keys
will be removed.

There's also tailscaled-on-macOS, but it won't have a TPM or Keychain
bindings anyway.

Updates #18302
Updates #15830

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
Andrew Lytvynov 2e77b75e96
ipn/ipnlocal: don't fail profile unmarshal due to attestation keys (#18335)
Soft-fail on initial unmarshal and try again, ignoring the
AttestationKey. This helps in cases where something about the
attestation key storage (usually a TPM) is messed up. The old key will
be lost, but at least the node can start again.

Updates #18302
Updates #15830

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
3 weeks ago
James Tucker 39a61888b8 ssh/tailssh: send audit messages on SSH login (Linux)
Send LOGIN audit messages to the kernel audit subsystem on Linux
when users successfully authenticate to Tailscale SSH. This provides
administrators with audit trail integration via auditd or journald,
recording details about both the Tailscale user (whois) and the
mapped local user account.

The implementation uses raw netlink sockets to send AUDIT_USER_LOGIN
messages to the kernel audit subsystem. It requires CAP_AUDIT_WRITE
capability, which is checked at runtime. If the capability is not
present, audit logging is silently skipped.

Audit messages are sent to the kernel (pid 0) and consumed by either
auditd (written to /var/log/audit/audit.log) or journald (available
via journalctl _TRANSPORT=audit), depending on system configuration.

Note: This may result in duplicate messages on a system where
auditd/journald audit logs are enabled and the system has and supports
`login -h`. Sadly Linux login code paths are still an inconsistent wild
west so we accept the potential duplication rather than trying to avoid
it.

Fixes #18332

Signed-off-by: James Tucker <james@tailscale.com>
3 weeks ago
Vince Liem b7081522e7
scripts/installer.sh: add ultramarine to supported OS list
3 weeks ago
Raj Singh d451cd54a7
cmd/derper: add --acme-email flag for GCP cert mode (#18278)
GCP Certificate Manager requires an email contact on ACME accounts.
Add --acme-email flag that is required for --certmode=gcp and
optional for --certmode=letsencrypt.

Fixes #18277

Signed-off-by: Raj Singh <raj@tailscale.com>
1 month ago
Nick Khyl 2917ea8d0e ipn/ipnauth, safesocket: defer named pipe client's token retrieval until ipnserver needs it
An error returned by net.Listener.Accept() causes the owning http.Server to shut down.
With the deprecation of net.Error.Temporary(), there's no way for the http.Server to test
whether the returned error is temporary / retryable or not (see golang/go#66252).

Because of that, errors returned by (*safesocket.winIOPipeListener).Accept() cause the LocalAPI
server (aka ipnserver.Server) to shut down, and tailscaled process to exit.

While this might be acceptable in the case of non-recoverable errors, such as programmer errors,
we shouldn't shut down the entire tailscaled process for client- or connection-specific errors,
such as when we couldn't obtain the client's access token because the client attempts to connect
at the Anonymous impersonation level. Instead, the LocalAPI server should gracefully handle
these errors by denying access and returning a 401 Unauthorized to the client.

In tailscale/tscert#15, we fixed a known bug where Caddy and other apps using tscert would attempt
to connect at the Anonymous impersonation level and fail. However, we should also fix this on the tailscaled
side to prevent a potential DoS, where a local app could deliberately open the Tailscale LocalAPI named pipe
at the Anonymous impersonation level and cause tailscaled to exit.

In this PR, we defer token retrieval until (*WindowsClientConn).Token() is called and propagate the returned token
or error via ipnauth.GetConnIdentity() to ipnserver, which handles it the same way as other ipnauth-related errors.

Fixes #18212
Fixes tailscale/tscert#13

Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 month ago
Alex Chan 9c3a420e15 cmd/tailscale/cli: document why there's no --force-reauth on login
Change-Id: Ied799fefbbb4612c7ba57b8369a418b7704eebf8
Updates #18273
Signed-off-by: Alex Chan <alexc@tailscale.com>
1 month ago
Alex Valiushko ee59470270
net/udprelay: remove tailscaled_peer_relay_endpoints_total (#18254)
This gauge will be reworked to include endpoint state in future.

Updates tailscale/corp#30820

Change-Id: I66f349d89422b46eec4ecbaf1a99ad656c7301f9

Signed-off-by: Alex Valiushko <alexvaliushko@tailscale.com>
1 month ago
Irbe Krumina 90b4358113
cmd/k8s-operator,ipn/ipnlocal: allow opting out of ACME order replace extension (#18252)
In dynamically changing environments where ACME account keys and certs
are stored separately, it can happen that the account key would get
deleted (and recreated) between issuances. If that is the case,
we currently fail renewals and the only way to recover is for users
to delete certs.
This adds a config knob to allow opting out of the replaces extension
and utilizes it in the Kubernetes operator where there are known
user workflows that could end up with this edge case.

Updates #18251

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
1 month ago
Alex Valiushko c40f352103
net/udprelay: expose peer relay metrics (#18218)
Adding both user and client metrics for peer relay forwarded bytes and
packets, and the total endpoints gauge.

User metrics:
tailscaled_peer_relay_forwarded_packets_total{transport_in, transport_out}
tailscaled_peer_relay_forwarded_bytes_total{transport_in, transport_out}
tailscaled_peer_relay_endpoints_total{}

Where the transport labels can be "udp4" or "udp6".

Client metrics:
udprelay_forwarded_(packets|bytes)_udp(4|6)_udp(4|6)
udprelay_endpoints

RELNOTE: Expose tailscaled metrics for peer relay.

Updates tailscale/corp#30820

Change-Id: I1a905d15bdc5ee84e28017e0b93210e2d9660259

Signed-off-by: Alex Valiushko <alexvaliushko@tailscale.com>
1 month ago
Tom Proctor bb3529fcd4
cmd/containerboot: support egress to Tailscale Service FQDNs (#17493)
Adds support for targeting FQDNs that are a Tailscale Service. Uses the
same method of searching for Services as the tailscale configure
kubeconfig command. This fixes using the tailscale.com/tailnet-fqdn
annotation for Kubernetes Service when the specified FQDN is a Tailscale
Service.

Fixes #16534

Change-Id: I422795de76dc83ae30e7e757bc4fbd8eec21cc64

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Signed-off-by: Becky Pauley <becky@tailscale.com>
1 month ago
Tom Proctor eed5e95e27 docs: use -x for cherry-picks
Updates #cleanup

Change-Id: I5222e23b716b342d7c6d113fc539d2021024348e
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
1 month ago
Irbe Krumina b73fb467e4
ipn/ipnlocal: log cert renewal failures (#18246)
Updates #cleanup

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
1 month ago
Brendan Creane e4847fa77b
go.toolchain.rev: update to Go 1.25.5 (#18123)
Updates #18122

Signed-off-by: Brendan Creane <bcreane@gmail.com>
1 month ago
Andrew Lytvynov ce7e1dea45
types/persist: omit Persist.AttestationKey based on IsZero (#18241)
IsZero is required by the interface, so we should use that before trying
to serialize the key.

Updates #35412

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
1 month ago
Tom Meadows b21cba0921
cmd/k8s-operator: fixes helm template for oauth secret volume mount (#18230)
Fixes #18228

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
1 month ago
Andrew Dunham 323604b76c net/dns/resolver: log source IP of forwarded queries
When the TS_DEBUG_DNS_FORWARD_SEND envknob is turned on, also log the
source IP:port of the query that tailscaled is forwarding.

Updates tailscale/corp#35374

Signed-off-by: Andrew Dunham <andrew@tailscale.com>
1 month ago
Jonathan Nobels 3e89068792
net/netmon, wgengine/userspace: purge ChangeDelta.Major and address TODOs (#17823)
updates tailscale/corp#33891

Addresses several of the older TODOs in netmon. This removes the
Major flag and precomputes the ChangeDelta state, rather than making
consumers of ChangeDeltas sort that out themselves. We're also seeing
a lot of ChangeDeltas being flagged as "Major" when they are
not interesting, triggering rebinds in wgengine that are not needed. This
cleans that up and adds a host of additional tests.

The dependencies are cleaned, notably removing dependency on netmon
itself for calculating what is interesting, and what is not.  This includes letting
individual platforms set a bespoke global "IsInterestingInterface"
function.  This is only used on Darwin.

RebindRequired now roughly follows how "Major" was historically
calculated but includes some additional checks for various
uninteresting events such as changes in interface addresses that
shouldn't trigger a rebind. This significantly reduces thrashing (by
roughly half on Darwin clients that switch between NICs). The individual
values that we roll into RebindRequired are also exposed so that
components consuming netmap.ChangeDelta can ask more
targeted questions.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
1 month ago
Will Norris 0fd1670a59 client/local: add method to set gauge metric to a value
The existing client metric methods only support incrementing (or
decrementing) a delta value.  This new method allows setting the metric
to a specific value.

Updates tailscale/corp#35327

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
1 month ago
stratself f174ecb6fd
words: 33 tails and 26 scales (#18213)
Updates #words

Signed-off-by: stratself <126093083+stratself@users.noreply.github.com>
1 month ago
Jordan Whited a663639bea net/udprelay: replace map+sync.Mutex with sync.Map for VNI lookup
This commit also introduces a sync.Mutex for guarding mutatable fields
on serverEndpoint, now that it is no longer guarded by the sync.Mutex
in Server.

These changes reduce lock contention and by effect increase aggregate
throughput under high flow count load. A benchmark on Linux with AWS
c8gn instances showed a ~30% increase in aggregate throughput (37Gb/s
vs 28Gb/s) for 12 tailscaled flows.

Updates tailscale/corp#35264

Signed-off-by: Jordan Whited <jordan@tailscale.com>
1 month ago
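A hedged sketch of the general map+Mutex to sync.Map swap for a read-heavy lookup (key and value types here are illustrative, not the relay server's actual ones):

```go
package relaymapsketch

import "sync"

// vniTable maps virtual network identifiers to endpoints. sync.Map is a
// good fit here: entries are written once and then read by many
// forwarding goroutines, so lookups avoid contending on a single Mutex.
type vniTable struct {
	m sync.Map // vni (uint32) -> *endpoint
}

type endpoint struct {
	mu sync.Mutex // guards mutable per-endpoint state, as in the commit
	// ... peer addresses, last-seen times, etc.
}

func (t *vniTable) lookup(vni uint32) (*endpoint, bool) {
	v, ok := t.m.Load(vni)
	if !ok {
		return nil, false
	}
	return v.(*endpoint), true
}

func (t *vniTable) insert(vni uint32, e *endpoint) { t.m.Store(vni, e) }
```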
Will Norris 951d711054 client/systray: add missing deferred unlock for httpCache mutex
Updates #cleanup

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
1 month ago
Tom Proctor d0d993f5d6 .github,cmd/cigocacher: add flags --version --stats --cigocached-host
Add flags:

* --cigocached-host to support alternative host resolution in other
  environments, like the corp repo.
* --stats to reduce the amount of bash script we need.
* --version to support a caching tool/cigocacher script that will
  download from GitHub releases.

Updates tailscale/corp#10808

Change-Id: Ib2447bc5f79058669a70f2c49cef6aedd7afc049
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
1 month ago
Tom Meadows d7a5624841
cmd/k8s-operator: fix statefulset template yaml indentation (#18194)
Fixes #17000

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
1 month ago
Irbe Krumina cb5fa35f57
.github/workflows,Dockerfile,Dockerfile.base: add a test for base image (#18180)
Test that the base image builds and has the right iptables binary
linked.

Updates #17854

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
James 'zofrex' Sanderson 3ef9787379
tsweb: add Unwrap to loggingResponseWriter for ResponseController (#18195)
The new http.ResponseController type added in Go 1.20
(https://go.dev/doc/go1.20#http_responsecontroller) requires ResponseWriters
that wrap the original writer passed to ServeHTTP to implement an Unwrap
method (https://pkg.go.dev/net/http#NewResponseController).

With this in place, it is possible to call methods such as Flush and
SetReadDeadline on a loggingResponseWriter without needing to implement them
there ourselves.

Updates tailscale/corp#34763
Updates tailscale/corp#34813

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
2 months ago
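The Unwrap hook that http.NewResponseController looks for is just a method returning the wrapped writer; a minimal sketch of a logging wrapper with it (not tsweb's actual type):

```go
package twebsketch

import "net/http"

// loggingResponseWriter records the status code while delegating to the
// underlying ResponseWriter.
type loggingResponseWriter struct {
	http.ResponseWriter
	code int
}

func (w *loggingResponseWriter) WriteHeader(code int) {
	w.code = code
	w.ResponseWriter.WriteHeader(code)
}

// Unwrap lets http.ResponseController reach the original writer, so
// calls like Flush and SetWriteDeadline work without reimplementing them.
func (w *loggingResponseWriter) Unwrap() http.ResponseWriter {
	return w.ResponseWriter
}

func handler(w http.ResponseWriter, r *http.Request) {
	lw := &loggingResponseWriter{ResponseWriter: w, code: http.StatusOK}
	rc := http.NewResponseController(lw)
	lw.Write([]byte("hello\n"))
	rc.Flush() // reaches the real writer via Unwrap
}
```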
Raj Singh 65182f2119
ipn/ipnlocal: add ProxyProtocol support to VIP service TCP handler (#18175)
tcpHandlerForVIPService was missing ProxyProtocol support that
tcpHandlerForServe already had. Extract the shared logic into
forwardTCPWithProxyProtocol helper and use it in both handlers.

Fixes #18172

Signed-off-by: Raj Singh <raj@tailscale.com>
2 months ago
Joe Tsai 9613b4eecc
logtail: add metrics (#18184)
Add metrics about logtail uploading and underlying buffer.
Add metrics to the in-memory buffer implementation.

Updates tailscale/corp#21363

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2 months ago
Brad Fitzpatrick 0df4631308 ipn/ipnlocal: avoid ResetAndStop panic
Updates #18187

Change-Id: If7375efb7df0452a5e85b742fc4c4eecbbd62717
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Simon Law 6ace3995f0
portlist: skip tests on Linux 6.14.x with /proc/net/tcp bug (#18185)
PR #18033 skipped tests for the versions of Linux 6.6 and 6.12 that
had a regression in /proc/net/tcp that causes seek operations to fail
with “illegal seek”.

This PR skips tests for Linux 6.14.0, which is the default Ubuntu
kernel, that also contains this regression.

Updates #16966

Signed-off-by: Simon Law <sfllaw@tailscale.com>
2 months ago
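A sketch of how such a kernel-version skip can be written with golang.org/x/sys/unix (the version list here is illustrative; the real test maintains its own table):

```go
package portlistsketch

import (
	"strings"
	"testing"

	"golang.org/x/sys/unix"
)

// skipOnBuggyKernel skips t on kernel releases known to carry the
// /proc/net/tcp "illegal seek" regression.
func skipOnBuggyKernel(t *testing.T) {
	t.Helper()
	var uts unix.Utsname
	if err := unix.Uname(&uts); err != nil {
		t.Fatalf("uname: %v", err)
	}
	release := unix.ByteSliceToString(uts.Release[:])
	for _, bad := range []string{"6.14.0"} { // illustrative list
		if strings.HasPrefix(release, bad) {
			t.Skipf("kernel %s has the /proc/net/tcp seek regression", release)
		}
	}
}
```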
Joe Tsai 6428ba01ef
logtail/filch: rewrite the package (#18143)
The filch implementation is fairly broken:

* When Filch.cur exceeds MaxFileSize, it calls moveContents
to copy the entirety of cur into alt (while holding the write lock).
By nature, this is the movement of a lot of data in a hot path,
meaning that all log calls will be globally blocked!
It also means that log uploads will be blocked during the move.

* The implementation of moveContents is buggy in that
it copies data from cur into the start of alt,
but fails to truncate alt to the number of bytes copied.
Consequently, there are unrelated lines near the end,
leading to out-of-order lines when being read back.

* Data filched via stderr do not directly respect MaxFileSize,
which is only checked every 100 Filch.Write calls.
This means that it is possible that the file grows far beyond
the specified max file size before moveContents is called.

* If both log files have data when New is called,
it also copies the entirety of cur into alt.
This can block the startup of a process copying lots of data
before the process can do any useful work.

* TryReadLine is implemented using bufio.Scanner.
Unfortunately, it will choke on any lines longer than
bufio.MaxScanTokenSize, rather than gracefully skip over them.

The re-implementation avoids a lot of these problems
by fundamentally eliminating the need for moveContents.
We enforce MaxFileSize by simply rotating the log files
whenever the current file exceeds MaxFileSize/2.
This is a constant-time operation regardless of file size.

To more gracefully handle lines longer than bufio.MaxScanTokenSize,
we skip over these lines (without growing the read buffer)
and report an error. This allows subsequent lines to be read.

In order to improve debugging, we add a lot of metrics.

Note that the mechanism of dup2 with stderr
is inherently racy with the two-file approach.
The order of operations during a rotation is carefully chosen
to reduce the race window to be as short as possible.
Thus, this is slightly less racy than before.

Updates tailscale/corp#21363

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2 months ago
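A minimal sketch of the rotate-at-half-size idea, independent of the real filch types: instead of copying the current file into the alternate one, the writer swaps which file is current once it passes MaxFileSize/2.

```go
package filchsketch

import "os"

type rotatingWriter struct {
	maxFileSize int64
	cur, alt    *os.File
	curSize     int64
}

// Write appends to the current file, rotating to the (truncated)
// alternate file once the current one would exceed half the budget.
// The swap is O(1) regardless of how much data has been written.
func (w *rotatingWriter) Write(p []byte) (int, error) {
	if w.curSize+int64(len(p)) > w.maxFileSize/2 {
		if err := w.alt.Truncate(0); err != nil {
			return 0, err
		}
		if _, err := w.alt.Seek(0, 0); err != nil {
			return 0, err
		}
		w.cur, w.alt = w.alt, w.cur
		w.curSize = 0
	}
	n, err := w.cur.Write(p)
	w.curSize += int64(n)
	return n, err
}
```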
Claus Lensbøl c870d3811d
net/{packet,tstun},wgengine: update disco key when receiving via TSMP (#18158)
When receiving a TSMPDiscoAdvertisement from peer, update the discokey
for said peer.

Some parts taken from: https://github.com/tailscale/tailscale/pull/18073/

Updates #12639

Co-authored-by: James Tucker <james@tailscale.com>
2 months ago
Irbe Krumina 723b9af21a
Dockerfile,Dockerfile.base: link iptables to legacy binary (#18177)
Re-instate the linking of iptables installed in Tailscale container
to the legacy iptables version. In environments where the legacy
iptables is not needed, we should be able to run nftables instead,
but this will ensure that Tailscale keeps working in environments
that don't support nftables, such as some Synology NAS hosts.

Updates #17854

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
Raj Singh 8eda947530
cmd/derper: add GCP Certificate Manager support (#18161)
Add --certmode=gcp for using Google Cloud Certificate Manager's
public CA instead of Let's Encrypt. GCP requires External Account
Binding (EAB) credentials for ACME registration, so this adds
--acme-eab-kid and --acme-eab-key flags.

The EAB key accepts both base64url and standard base64 encoding
to support both ACME spec format and gcloud output.

Fixes tailscale/corp#34881

Signed-off-by: Raj Singh <raj@tailscale.com>
Co-authored-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Claus Lensbøl 1dfdee8521
net/dns: retrample resolv.conf when another process has trampled it (#18069)
When using the resolv.conf file for setting DNS, it is possible that
some other services will trample the file and overwrite our set DNS
server. Experiments have shown this to be a racy error depending on how
quickly processes start.

Make an attempt to trample back the file a limited number of times if
the file is changed.

Updates #16635

Signed-off-by: Claus Lensbøl <claus@tailscale.com>
2 months ago
Jordan Whited a9b37c510c net/udprelay: re-use mono.Time in control packet handling
Fixes tailscale/corp#35100

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2 months ago
Simar 363d882306 net/udprelay: use `mono.Time` instead of `time.Time`
Fixes: https://github.com/tailscale/tailscale/issues/18064

Signed-off-by: Simar <simar@linux.com>
2 months ago
Fran Bull 076d5c7214 appc,feature: add the start of new conn25 app connector
When peers request an IP address mapping to be stored, the connector
stores it in memory.

Fixes tailscale/corp#34251
Signed-off-by: Fran Bull <fran@tailscale.com>
2 months ago
Tom Proctor dd1bb8ee42 .github: add cigocacher release workflow
To save rebuilding cigocacher on each CI job, build it on-demand, and
publish a release similar to how we publish releases for tool/go to
consume. Once the first release is done, we can add a new
tool/cigocacher script that pins to a specific release for each branch
to download.

Updates tailscale/corp#10808

Change-Id: I7694b2c2240020ba2335eb467522cdd029469b6c
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2 months ago
Alex Chan 378ee20b9a cmd/tailscale/cli: stabilise the output of `tailscale lock status --json`
This patch stabilises the JSON output, and improves it in the following
ways:

* The AUM hash in Head uses the base32-encoded form of an AUM hash,
  consistent with how it's presented elsewhere
* TrustedKeys are in the same format as the keys from `tailscale lock log --json`
* SigKind, Pubkey and KeyID are all presented consistently with other
  JSON output in NodeKeySignature
* FilteredPeers don't have a NodeKeySignature, because it will always
  be empty

For reference, here's the JSON output from the CLI prior to this change:

```json
{
  "Enabled": true,
  "Head": [
    196,
    69,
    63,
    243,
    213,
    133,
    123,
    46,
    183,
    203,
    143,
    34,
    184,
    85,
    80,
    1,
    221,
    92,
    49,
    213,
    93,
    106,
    5,
    206,
    176,
    250,
    58,
    165,
    155,
    136,
    11,
    13
  ],
  "PublicKey": "nlpub:0f99af5c02216193963ce9304bb4ca418846eddebe237f37a6de1c59097ed0b8",
  "NodeKey": "nodekey:8abfe98b38151748919f6e346ad16436201c3ecd453b01e9d6d3a38e1826000d",
  "NodeKeySigned": true,
  "NodeKeySignature": {
    "SigKind": 1,
    "Pubkey": "bnCKv+mLOBUXSJGfbjRq0WQ2IBw+zUU7AenW06OOGCYADQ==",
    "KeyID": "D5mvXAIhYZOWPOkwS7TKQYhG7d6+I383pt4cWQl+0Lg=",
    "Signature": "4DPW4v6MyLLwQ8AMDm27BVDGABjeC9gg1EfqRdKgzVXi/mJDwY9PTAoX0+0WTRs5SUksWjY0u1CLxq5xgjFGBA==",
    "Nested": null,
    "WrappingPubkey": "D5mvXAIhYZOWPOkwS7TKQYhG7d6+I383pt4cWQl+0Lg="
  },
  "TrustedKeys": [
    {
      "Key": "nlpub:0f99af5c02216193963ce9304bb4ca418846eddebe237f37a6de1c59097ed0b8",
      "Metadata": null,
      "Votes": 1
    },
    {
      "Key": "nlpub:de2254c040e728140d92bc967d51284e9daea103a28a97a215694c5bda2128b8",
      "Metadata": null,
      "Votes": 1
    }
  ],
  "VisiblePeers": [
    {
      "Name": "signing2.taila62b.unknown.c.ts.net.",
      "ID": 7525920332164264,
      "StableID": "nRX6TbAWm121DEVEL",
      "TailscaleIPs": [
        "100.110.67.20",
        "fd7a:115c:a1e0::9c01:4314"
      ],
      "NodeKey": "nodekey:10bf4a5c168051d700a29123cd81568377849da458abef4b328794ca9cae4313",
      "NodeKeySignature": {
        "SigKind": 1,
        "Pubkey": "bnAQv0pcFoBR1wCikSPNgVaDd4SdpFir70syh5TKnK5DEw==",
        "KeyID": "D5mvXAIhYZOWPOkwS7TKQYhG7d6+I383pt4cWQl+0Lg=",
        "Signature": "h9fhwHiNdkTqOGVQNdW6AVFoio6MFaFobPiK9ydywgmtYxcExJ38b76Tabdc56aNLxf8IfCaRw2VYPcQG2J/AA==",
        "Nested": null,
        "WrappingPubkey": "3iJUwEDnKBQNkryWfVEoTp2uoQOiipeiFWlMW9ohKLg="
      }
    }
  ],
  "FilteredPeers": [
    {
      "Name": "node3.taila62b.unknown.c.ts.net.",
      "ID": 5200614049042386,
      "StableID": "n3jAr7KNch11DEVEL",
      "TailscaleIPs": [
        "100.95.29.124",
        "fd7a:115c:a1e0::f901:1d7c"
      ],
      "NodeKey": "nodekey:454d2c8602c10574c5ec3a6790f159714802012b7b8bb8d2ab47d637f9df1d7b",
      "NodeKeySignature": {
        "SigKind": 0,
        "Pubkey": null,
        "KeyID": null,
        "Signature": null,
        "Nested": null,
        "WrappingPubkey": null
      }
    }
  ],
  "StateID": 16885615198276932820
}
```

Updates https://github.com/tailscale/corp/issues/22355
Updates https://github.com/tailscale/tailscale/issues/17619

Signed-off-by: Alex Chan <alexc@tailscale.com>

Change-Id: I65b58ff4520033e6b70fc3b1ba7fc91c1f70a960
2 months ago
Nick Khyl da0ea8ef3e Revert "ipn/ipnlocal: shut down old control client synchronously on reset"
It appears (*controlclient.Auto).Shutdown() can still deadlock when called with b.mu held, and therefore the changes in #18127 are unsafe.

This reverts #18127 until we figure out what causes it.

This reverts commit d199ecac80.

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2 months ago
Erisa A c7b10cb39f
scripts/installer.sh: add SteamOS handling (#18159)
Fixes #12943

Signed-off-by: Erisa A <erisa@tailscale.com>
2 months ago
Alex Chan 7d3097d3b5 tka: add some more tests for Bootstrap()
This improves our test coverage of the Bootstrap() method, especially
around catching AUMs that shouldn't pass validation.

Updates #cleanup

Change-Id: Idc61fcbc6daaa98c36d20ec61e45ce48771b85de
Signed-off-by: Alex Chan <alexc@tailscale.com>
2 months ago
Irbe Krumina 2a0ddb7897
cmd/k8s-operator: warn if users attempt to expose a headless Service (#18140)
Previously, if users attempted to expose a headless Service to tailnet,
this just silently did not work.
This PR makes the operator throw a warning event + update Service's
status with an error message.

Updates #18139

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
Irbe Krumina d5c893195b
cmd/k8s-operator: don't log errors on not found objects. (#18142)
The event queue receives delete events, which means that sometimes
the object that should be reconciled no longer exists.
Don't log user-facing errors when that is the case.
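
The usual pattern for this in controller-runtime reconcilers looks something like the sketch below (illustrative only, not the operator's actual code):

```go
// Illustrative only: swallow NotFound errors when fetching the object being
// reconciled, since delete events routinely race with the work queue.
package k8soperator

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func getService(ctx context.Context, c client.Client, name types.NamespacedName) (*corev1.Service, error) {
	svc := &corev1.Service{}
	if err := c.Get(ctx, name, svc); err != nil {
		if apierrors.IsNotFound(err) {
			// Object was deleted between enqueue and reconcile: nothing to do,
			// and nothing worth surfacing as a user-facing error.
			return nil, nil
		}
		return nil, err
	}
	return svc, nil
}
```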

Updates #18141

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2 months ago
Claus Lensbøl d349370e55
client/systray: change systray to start after graphical.target (#18138)
The service was starting after systemd itself, and while this
surprisingly worked for some situations, it broke for others.

Change it to start after a GUI has been initialized.

Updates #17656

Signed-off-by: Claus Lensbøl <claus@tailscale.com>
2 months ago
James 'zofrex' Sanderson cf40cf5ccb
ipn/ipnlocal: add peer API endpoints to Hostinfo on initial client creation (#17851)
Previously we only set this when it updated, which was fine for the first
call to Start(), but after that point future updates would be skipped if
nothing had changed. If Start() was called again, it would wipe the peer API
endpoints and they wouldn't get added back again, breaking exit nodes (and
anything else requiring peer API to be advertised).
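
The underlying bug pattern is generic: state is pushed only when it changes, but a restart wipes the advertised copy without resetting the change detector. A tiny sketch with hypothetical names (not the actual Hostinfo code):

```go
// Hypothetical sketch of the bug pattern: endpoints are sent only when they
// change, but start() clears the advertised copy without clearing "last sent",
// so an unchanged value is never re-advertised after a restart.
package sketch

import "slices"

type advertiser struct {
	lastSent  []string // what we last pushed upstream
	published []string // what is currently advertised
}

func (a *advertiser) setEndpoints(eps []string) {
	if slices.Equal(eps, a.lastSent) {
		return // skipped: nothing changed since the last push
	}
	a.lastSent = eps
	a.published = eps
}

// start resets advertised state. Bug: it forgets to also reset lastSent, so a
// later setEndpoints() with the same value is skipped and published stays
// empty. The fix in this commit is, in effect, to populate the state
// unconditionally at creation time rather than only on change.
func (a *advertiser) start() {
	a.published = nil
}
```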

Updates tailscale/corp#27173

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
2 months ago
Peter A. f4d34f38be cmd/tailscale,ipn: add Unix socket support for serve
Based on PR #16700 by @lox, adapted to current codebase.

Adds support for proxying HTTP requests to Unix domain sockets via
tailscale serve unix:/path/to/socket, enabling exposure of services
like Docker, containerd, PHP-FPM over Tailscale without TCP bridging.

The implementation includes reasonable protections against exposure of
tailscaled's own socket.
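
The core of proxying HTTP to a Unix socket is a custom DialContext on the transport; roughly like this sketch (example socket path, not the exact serve implementation):

```go
// Sketch: reverse-proxy HTTP requests to a Unix domain socket by overriding
// the transport's DialContext. The socket path is an example; this is not the
// exact code used by `tailscale serve unix:...`.
package sketch

import (
	"context"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func unixProxy(socketPath string) *httputil.ReverseProxy {
	rp := httputil.NewSingleHostReverseProxy(&url.URL{Scheme: "http", Host: "unix"})
	rp.Transport = &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			var d net.Dialer
			// Ignore the requested network/address and always dial the socket.
			return d.DialContext(ctx, "unix", socketPath)
		},
	}
	return rp
}
```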

Adaptations from original PR:
- Use net.Dialer.DialContext instead of net.Dial for context propagation
- Use http.Transport with Protocols API (current h2c approach, not http2.Transport)
- Resolve conflicts with hasScheme variable in ExpandProxyTargetValue

Updates #9771

Signed-off-by: Peter A. <ink.splatters@pm.me>
Co-authored-by: Lachlan Donald <lachlan@ljd.cc>
2 months ago
Nick Khyl 557457f3c2
ipn/ipnlocal: fix LocalBackend deadlock when packet arrives during profile switch (#18126)
If a packet arrives while WireGuard is being reconfigured with b.mu held, such as during a profile switch,
calling back into (*LocalBackend).GetPeerAPIPort from (*Wrapper).filterPacketInboundFromWireGuard
may deadlock when it tries to acquire b.mu.

This occurs because a peer cannot be removed while an inbound packet is being processed.
The reconfig and profile switch wait for (*Peer).RoutineSequentialReceiver to return, but it never finishes
because GetPeerAPIPort needs b.mu, which the waiting goroutine already holds.

In this PR, we make peerAPIPorts a new syncs.AtomicValue field that is written with b.mu held
but can be read by GetPeerAPIPort without holding the mutex, which fixes the deadlock.
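
The fix follows a common pattern: writers still update the field with b.mu held, but reads go through an atomic value so packet-path callbacks never need the mutex. A rough standard-library sketch (field and method names are illustrative, not the actual LocalBackend code):

```go
// Illustrative sketch: writers update the value while holding mu, readers use
// an atomic load and never touch mu, so the packet path cannot deadlock
// against a goroutine that already holds the lock.
package sketch

import (
	"sync"
	"sync/atomic"
)

type backend struct {
	mu           sync.Mutex
	peerAPIPorts atomic.Value // holds []uint16
}

// setPeerAPIPorts is called alongside other state updates that require mu.
func (b *backend) setPeerAPIPorts(ports []uint16) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.peerAPIPorts.Store(append([]uint16(nil), ports...)) // store a copy
}

// GetPeerAPIPorts is safe to call from packet filters without taking mu.
func (b *backend) GetPeerAPIPorts() []uint16 {
	ports, _ := b.peerAPIPorts.Load().([]uint16)
	return ports
}
```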

There might be other long-term ways to address the issue, such as moving peer API listeners
from LocalBackend to nodeBackend so they can be accessed without holding b.mu,
but these changes are too large and risky at this stage in the v1.92 release cycle.

Updates #18124

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2 months ago
Nick Khyl d199ecac80 ipn/ipnlocal: shut down old control client synchronously on reset
Previously, callers of (*LocalBackend).resetControlClientLocked were supposed
to call Shutdown on the returned controlclient.Client after releasing b.mu.
In #17804, we started calling Shutdown while holding b.mu, which caused
deadlocks during profile switches due to the (*ExecQueue).RunSync implementation.

We first patched this in #18053 by calling Shutdown in a new goroutine,
which avoided the deadlocks but made TestStateMachine flaky because
the shutdown order was no longer guaranteed.

In #18070, we updated (*ExecQueue).RunSync to allow shutting down
the queue without waiting for RunSync to return. With that change,
shutting down the control client while holding b.mu became safe.

Therefore, this PR updates (*LocalBackend).resetControlClientLocked
to shut down the old client synchronously during the reset, instead of
returning it and shifting that responsibility to the callers.

This fixes the flaky tests and simplifies the code.

Fixes #18052

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2 months ago
Andrew Lytvynov 7bc25f77f4
go.toolchain.rev: update to Go 1.25.5 (#18123)
Updates #18122

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2 months ago
Jordan Whited 6a44990b09 net/udprelay: bind multiple sockets per af on Linux
This commit uses SO_REUSEPORT (when supported) to bind multiple sockets
per address family. Increasing the number of sockets can increase
aggregate throughput when serving many peer relay client flows.
Benchmarks show 3x improvement in max aggregate bitrate in some
environments.
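
Binding several UDP sockets to the same local port relies on a ListenConfig Control hook that sets SO_REUSEPORT before bind; roughly like this Linux-only sketch (not the exact net/udprelay code):

```go
//go:build linux

// Sketch: bind n UDP sockets to the same local port with SO_REUSEPORT so the
// kernel spreads incoming flows across them. Not the exact net/udprelay code.
package sketch

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

func listenReusePort(ctx context.Context, addr string, n int) ([]*net.UDPConn, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var serr error
			err := c.Control(func(fd uintptr) {
				serr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			})
			if err != nil {
				return err
			}
			return serr
		},
	}
	var conns []*net.UDPConn
	for i := 0; i < n; i++ {
		pc, err := lc.ListenPacket(ctx, "udp4", addr)
		if err != nil {
			return nil, err
		}
		conns = append(conns, pc.(*net.UDPConn))
	}
	return conns, nil
}
```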

Updates tailscale/corp#34745

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2 months ago
Alex Chan e33f6aa3ba go.mod: bump the version of setec
Updates https://github.com/tailscale/corp/issues/34813

Change-Id: I926f1bad5bf143d82ddb36f51f70deb24fa11e71
Signed-off-by: Alex Chan <alexc@tailscale.com>
2 months ago
Tom Proctor f8cd07fb8a .github: make cigocacher script more robust
We got a flake in https://github.com/tailscale/tailscale/actions/runs/19867229792/job/56933249360
but it's not obvious to me where it failed. Make it more robust and
print out more useful error messages for next time.

Updates tailscale/corp#10808

Change-Id: I9ca08ea1103b9ad968c9cc0c42a493981ea62435
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2 months ago
Brad Fitzpatrick b8c58ca7c1 wgengine: fix TSMP/ICMP callback leak
Fixes #18112

Change-Id: I85d5c482b01673799d51faeb6cb0579903597502
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Gesa Stupperich 536188c1b5 tsnet: enable node registration via federated identity
Updates tailscale/corp#34148

Signed-off-by: Gesa Stupperich <gesa@tailscale.com>
2 months ago
Joe Tsai 957a443b23
cmd/netlogfmt: allow empty --resolve-addrs flag (#18103)
Updates tailscale/corp#33352

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2 months ago
Raj Singh bd5c50909f
scripts/installer: add TAILSCALE_VERSION environment variable (#18014)
Add support for pinning specific Tailscale versions during installation
via the TAILSCALE_VERSION environment variable.

Example usage:
  curl -fsSL https://tailscale.com/install.sh | TAILSCALE_VERSION=1.88.4 sh

Fixes #17776

Signed-off-by: Raj Singh <raj@tailscale.com>
2 months ago
Tom Proctor 22a815b6d2 tool: bump binaryen wasm optimiser version 111 -> 125
111 is 3 years old, and there have been a lot of speed improvements
since then. We run wasm-opt twice as part of the CI wasm job, and it
currently takes about 3 minutes each time. With 125, it takes ~40
seconds, a 4.5x speed-up.

Updates #cleanup

Change-Id: I671ae6cefa3997a23cdcab6871896b6b03e83a4f
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2 months ago
License Updater 8976b34cb8 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2 months ago
Naasir 77dcdc223e cleanup: fix typos across multiple files
Does not affect code.

Updates #cleanup

Signed-off-by: Naasir <yoursdeveloper@protonmail.com>
2 months ago
Tom Proctor ece6e27f39 .github,cmd/cigocacher: use cigocacher for windows
Implements a new disk put function for cigocacher that does not cause
locking issues on Windows when there are multiple processes reading and
writing the same files concurrently. Integrates cigocacher into test.yml
for Windows where we are running on larger runners that support
connecting to private Azure vnet resources where cigocached is hosted.
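
The commit does not spell out the mechanism, but a common way to make concurrent disk puts safe on Windows is to write to a process-unique temp file and rename it into place; a sketch under that assumption (not necessarily what cigocacher actually does):

```go
// Sketch of one common approach (an assumption, not necessarily cigocacher's
// implementation): write cache entries to a unique temp file and rename it
// into place, so concurrent readers never see a partially written file.
package sketch

import (
	"os"
	"path/filepath"
)

func putFile(dir, name string, data []byte) error {
	tmp, err := os.CreateTemp(dir, name+".tmp-*")
	if err != nil {
		return err
	}
	tmpName := tmp.Name()
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		os.Remove(tmpName)
		return err
	}
	if err := tmp.Close(); err != nil {
		os.Remove(tmpName)
		return err
	}
	// os.Rename replaces an existing destination atomically on POSIX and uses
	// MoveFileEx(..., MOVEFILE_REPLACE_EXISTING) on Windows.
	return os.Rename(tmpName, filepath.Join(dir, name))
}
```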

Updates tailscale/corp#10808

Change-Id: I0d0e9b670e49e0f9abf01ff3d605cd660dd85ebb
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2 months ago
Tom Proctor 97f1fd6d48 .github: only save cache on main
The cache artifacts from a full run of test.yml are 14GB. Only save
artifacts from the main branch to ensure we don't thrash too much. Most
branches should get decent performance with a hit from recent main.

Fixes tailscale/corp#34739

Change-Id: Ia83269d878e4781e3ddf33f1db2f21d06ea2130f
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2 months ago
Shaikh Naasir 37b4dd047f
k8s-operator: Fix typos in egress-pod-readiness.go
Updates #cleanup

Signed-off-by: Alex Chan <alexc@tailscale.com>
2 months ago
Alex Chan bd12d8f12f cmd/tailscale/cli: soften the warning on `--force-reauth` for seamless
Thanks to seamless key renewal, a force-reauth no longer necessarily costs
you your connection. We softened the interactive
warning (see #17262), so let's soften the help text as well.

Updates https://github.com/tailscale/corp/issues/32429

Signed-off-by: Alex Chan <alexc@tailscale.com>
2 months ago
Anton Tolchanov 34dff57137 feature/posture: log method and full URL for posture identity requests
Updates tailscale/corp#34676

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2 months ago
Fernando Serboncini f36eb81e61
cmd/k8s-operator: fix populateTLSSecret in tests (#18088)
The call to populateTLSSecret was broken between PRs.

Updates #cleanup

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>
2 months ago
Fernando Serboncini 7c5c02b77a
cmd/k8s-operator: add support for tailscale.com/http-redirect (#17596)
* cmd/k8s-operator: add support for tailscale.com/http-redirect

The k8s-operator now supports a tailscale.com/http-redirect annotation
on Ingress resources. When enabled, it automatically creates port 80
handlers that redirect to the equivalent HTTPS location.
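
A port 80 redirect handler of this kind is essentially the following (illustrative only, not the operator's generated configuration):

```go
// Illustrative handler: redirect any plain-HTTP request to the equivalent
// HTTPS URL. Not the operator's actual generated handler.
package sketch

import "net/http"

func httpsRedirect() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		u := *r.URL
		u.Scheme = "https"
		u.Host = r.Host
		http.Redirect(w, r, u.String(), http.StatusPermanentRedirect)
	})
}
```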

Fixes #11252

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

* Fix for permanent redirect

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

* lint

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

* warn for redirect+endpoint

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

* tests

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>

---------

Signed-off-by: Fernando Serboncini <fserb@tailscale.com>
2 months ago
Mario Minardi 411cee0dc9 .github/workflows: only run golang ci lint when go files have changed
Restrict running the golangci-lint workflow to when the workflow file
itself or a .go file, go.mod, or go.sum have actually been modified.

Updates #cleanup

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 months ago
dependabot[bot] b40272e767 build(deps): bump braces from 3.0.2 to 3.0.3 in /client/web
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-version: 3.0.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2 months ago
dependabot[bot] 22bdf34a00 build(deps): bump cross-spawn from 7.0.3 to 7.0.6 in /client/web
Bumps [cross-spawn](https://github.com/moxystudio/node-cross-spawn) from 7.0.3 to 7.0.6.
- [Changelog](https://github.com/moxystudio/node-cross-spawn/blob/master/CHANGELOG.md)
- [Commits](https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.6)

---
updated-dependencies:
- dependency-name: cross-spawn
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2 months ago
dependabot[bot] c0c0d45114 build(deps-dev): bump vitest from 1.3.1 to 1.6.1 in /client/web
Bumps [vitest](https://github.com/vitest-dev/vitest/tree/HEAD/packages/vitest) from 1.3.1 to 1.6.1.
- [Release notes](https://github.com/vitest-dev/vitest/releases)
- [Commits](https://github.com/vitest-dev/vitest/commits/v1.6.1/packages/vitest)

---
updated-dependencies:
- dependency-name: vitest
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2 months ago
dependabot[bot] 3e2476ec13 build(deps-dev): bump vite from 5.1.7 to 5.4.21 in /client/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.1.7 to 5.4.21.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.4.21/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.4.21/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 5.4.21
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2 months ago
dependabot[bot] 9500689bc1 build(deps): bump js-yaml from 4.1.0 to 4.1.1 in /client/web
Bumps [js-yaml](https://github.com/nodeca/js-yaml) from 4.1.0 to 4.1.1.
- [Changelog](https://github.com/nodeca/js-yaml/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nodeca/js-yaml/compare/4.1.0...4.1.1)

---
updated-dependencies:
- dependency-name: js-yaml
  dependency-version: 4.1.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2 months ago
Mario Minardi 9cc07bf9c0 .github/workflows: skip draft PRs for request review workflows
Skip the "request review" workflows for PRs that are in draft to reduce
noise / skip adding reviewers to PRs that are intentionally marked as
not ready to review.

Updates #cleanup

Signed-off-by: Mario Minardi <mario@tailscale.com>
2 months ago
Brad Fitzpatrick 74ed589042 syncs: add means of declaring locking assumptions for debug mode validation
Updates #17852

Change-Id: I42a64a990dcc8f708fa23a516a40731a19967aba
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2 months ago
Jonathan Nobels 3f9f0ed93c
VERSION.txt: this is v1.93.0 (#18074)
Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2 months ago
James Tucker 5ee0c6bf1d derp/derpserver: add a unique sender cardinality estimate
Adds an observation point that may identify potentially abusive traffic
patterns at outlier values.

Updates tailscale/corp#24681

Signed-off-by: James Tucker <james@tailscale.com>
2 months ago

@ -0,0 +1,54 @@
#!/usr/bin/env bash
#
# This script sets up cigocacher, but should never fail the build if unsuccessful.
# It expects to run on a GitHub-hosted runner, and connects to cigocached over a
# private Azure network that is configured at the runner group level in GitHub.
#
# Usage: ./action.sh
# Inputs:
# URL: The cigocached server URL.
# HOST: The cigocached server host to dial.
# Outputs:
# success: Whether cigocacher was set up successfully.
set -euo pipefail
if [ -z "${GITHUB_ACTIONS:-}" ]; then
echo "This script is intended to run within GitHub Actions"
exit 1
fi
if [ -z "${URL:-}" ]; then
echo "No cigocached URL is set, skipping cigocacher setup"
exit 0
fi
GOPATH=$(command -v go || true)
if [ -z "${GOPATH}" ]; then
if [ ! -f "tool/go" ]; then
echo "Go not available, unable to proceed"
exit 1
fi
GOPATH="./tool/go"
fi
BIN_PATH="${RUNNER_TEMP:-/tmp}/cigocacher$(${GOPATH} env GOEXE)"
if [ -d "cmd/cigocacher" ]; then
echo "cmd/cigocacher found locally, building from local source"
"${GOPATH}" build -o "${BIN_PATH}" ./cmd/cigocacher
else
echo "cmd/cigocacher not found locally, fetching from tailscale.com/cmd/cigocacher"
"${GOPATH}" build -o "${BIN_PATH}" tailscale.com/cmd/cigocacher
fi
CIGOCACHER_TOKEN="$("${BIN_PATH}" --auth --cigocached-url "${URL}" --cigocached-host "${HOST}" )"
if [ -z "${CIGOCACHER_TOKEN:-}" ]; then
echo "Failed to fetch cigocacher token, skipping cigocacher setup"
exit 0
fi
echo "Fetched cigocacher token successfully"
echo "::add-mask::${CIGOCACHER_TOKEN}"
echo "GOCACHEPROG=${BIN_PATH} --cache-dir ${CACHE_DIR} --cigocached-url ${URL} --cigocached-host ${HOST} --token ${CIGOCACHER_TOKEN}" >> "${GITHUB_ENV}"
echo "success=true" >> "${GITHUB_OUTPUT}"

@ -0,0 +1,35 @@
name: go-cache
description: Set up build to use cigocacher
inputs:
cigocached-url:
description: URL of the cigocached server
required: true
cigocached-host:
description: Host to dial for the cigocached server
required: true
checkout-path:
description: Path to cloned repository
required: true
cache-dir:
description: Directory to use for caching
required: true
outputs:
success:
description: Whether cigocacher was set up successfully
value: ${{ steps.setup.outputs.success }}
runs:
using: composite
steps:
- name: Setup cigocacher
id: setup
shell: bash
env:
URL: ${{ inputs.cigocached-url }}
HOST: ${{ inputs.cigocached-host }}
CACHE_DIR: ${{ inputs.cache-dir }}
working-directory: ${{ inputs.checkout-path }}
# https://github.com/orgs/community/discussions/25910
run: $GITHUB_ACTION_PATH/action.sh

@ -18,7 +18,7 @@ jobs:
runs-on: [ ubuntu-latest ]
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Build checklocks
run: ./tool/go build -o /tmp/checklocks gvisor.dev/gvisor/tools/checklocks/cmd/checklocks

@ -0,0 +1,73 @@
name: Build cigocacher
on:
# Released on-demand. The commit will be used as part of the tag, so generally
# prefer to release from main where the commit is stable in linear history.
workflow_dispatch:
jobs:
build:
strategy:
matrix:
GOOS: ["linux", "darwin", "windows"]
GOARCH: ["amd64", "arm64"]
runs-on: ubuntu-24.04
env:
GOOS: "${{ matrix.GOOS }}"
GOARCH: "${{ matrix.GOARCH }}"
CGO_ENABLED: "0"
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Build
run: |
OUT="cigocacher$(./tool/go env GOEXE)"
./tool/go build -o "${OUT}" ./cmd/cigocacher/
tar -zcf cigocacher-${{ matrix.GOOS }}-${{ matrix.GOARCH }}.tar.gz "${OUT}"
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: cigocacher-${{ matrix.GOOS }}-${{ matrix.GOARCH }}
path: cigocacher-${{ matrix.GOOS }}-${{ matrix.GOARCH }}.tar.gz
release:
runs-on: ubuntu-24.04
needs: build
permissions:
contents: write
steps:
- name: Download all artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
pattern: 'cigocacher-*'
merge-multiple: true
# This step is a simplified version of actions/create-release and
# actions/upload-release-asset, which are archived and unmaintained.
- name: Create release
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
const fs = require('fs');
const path = require('path');
const { data: release } = await github.rest.repos.createRelease({
owner: context.repo.owner,
repo: context.repo.repo,
tag_name: `cmd/cigocacher/${{ github.sha }}`,
name: `cigocacher-${{ github.sha }}`,
draft: false,
prerelease: true,
target_commitish: `${{ github.sha }}`
});
const files = fs.readdirSync('.').filter(f => f.endsWith('.tar.gz'));
for (const file of files) {
await github.rest.repos.uploadReleaseAsset({
owner: context.repo.owner,
repo: context.repo.repo,
release_id: release.id,
name: file,
data: fs.readFileSync(file)
});
console.log(`Uploaded ${file}`);
}

@ -45,7 +45,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
# Install a more recent Go that understands modern go.mod content.
- name: Install Go

@ -0,0 +1,29 @@
name: "Validate Docker base image"
on:
workflow_dispatch:
pull_request:
paths:
- "Dockerfile.base"
- ".github/workflows/docker-base.yml"
jobs:
build-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: "build and test"
run: |
set -e
IMG="test-base:$(head -c 8 /dev/urandom | xxd -p)"
docker build -t "$IMG" -f Dockerfile.base .
iptables_version=$(docker run --rm "$IMG" iptables --version)
if [[ "$iptables_version" != *"(legacy)"* ]]; then
echo "ERROR: Docker base image should contain legacy iptables; found ${iptables_version}"
exit 1
fi
ip6tables_version=$(docker run --rm "$IMG" ip6tables --version)
if [[ "$ip6tables_version" != *"(legacy)"* ]]; then
echo "ERROR: Docker base image should contain legacy ip6tables; found ${ip6tables_version}"
exit 1
fi

@ -8,6 +8,6 @@ jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: "Build Docker image"
run: docker build .

@ -17,7 +17,7 @@ jobs:
id-token: "write"
contents: "read"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: "${{ (inputs.tag != null) && format('refs/tags/{0}', inputs.tag) || '' }}"
- uses: DeterminateSystems/nix-installer-action@786fff0690178f1234e4e1fe9b536e94f5433196 # v20

@ -2,7 +2,11 @@ name: golangci-lint
on:
# For now, only lint pull requests, not the main branches.
pull_request:
paths:
- ".github/workflows/golangci-lint.yml"
- "**.go"
- "go.mod"
- "go.sum"
# TODO(andrew): enable for main branch after an initial waiting period.
#push:
# branches:
@ -23,17 +27,21 @@ jobs:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version-file: go.mod
cache: false
cache: true
- name: golangci-lint
uses: golangci/golangci-lint-action@1481404843c368bc19ca9406f87d6e0fc97bdcfd # v7.0.0
uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
with:
version: v2.4.0
# Show only new issues if it's a pull request.
only-new-issues: true
# Loading packages with a cold cache takes a while:
args: --timeout=10m

@ -14,7 +14,7 @@ jobs:
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Install govulncheck
run: ./tool/go install golang.org/x/vuln/cmd/govulncheck@latest

@ -58,6 +58,14 @@ jobs:
# Check a few images with wget rather than curl.
- { image: "debian:oldstable-slim", deps: "wget" }
- { image: "debian:sid-slim", deps: "wget" }
- { image: "debian:stable-slim", deps: "curl" }
- { image: "ubuntu:24.04", deps: "curl" }
- { image: "fedora:latest", deps: "curl" }
# Test TAILSCALE_VERSION pinning on a subset of distros.
# Skip Alpine as community repos don't reliably keep old versions.
- { image: "debian:stable-slim", deps: "curl", version: "1.80.0" }
- { image: "ubuntu:24.04", deps: "curl", version: "1.80.0" }
- { image: "fedora:latest", deps: "curl", version: "1.80.0" }
runs-on: ubuntu-latest
container:
image: ${{ matrix.image }}
@ -91,15 +99,21 @@ jobs:
contains(matrix.image, 'parrotsec') ||
contains(matrix.image, 'kalilinux')
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: run installer
run: scripts/installer.sh
env:
TAILSCALE_VERSION: ${{ matrix.version }}
# Package installation can fail in docker because systemd is not running
# as PID 1, so ignore errors at this step. The real check is the
# `tailscale --version` command below.
continue-on-error: true
- name: check tailscale version
run: tailscale --version
run: |
tailscale --version
if [ -n "${{ matrix.version }}" ]; then
tailscale --version | grep -q "^${{ matrix.version }}" || { echo "Version mismatch!"; exit 1; }
fi
notify-slack:
needs: test
runs-on: ubuntu-latest

@ -17,7 +17,7 @@ jobs:
runs-on: [ ubuntu-latest ]
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Build and lint Helm chart
run: |
eval `./tool/go run ./cmd/mkversion`

@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Install qemu
run: |
sudo rm /var/lib/man-db/auto-update

@ -22,7 +22,7 @@ jobs:
name: pin-github-actions
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: pin
run: make pin-github-actions
- name: check for changed workflow files

@ -2,6 +2,7 @@ name: request-dataplane-review
on:
pull_request:
types: [ opened, synchronize, reopened, ready_for_review ]
paths:
- ".github/workflows/request-dataplane-review.yml"
- "**/*derp*"
@ -10,11 +11,12 @@ on:
jobs:
request-dataplane-review:
if: github.event.pull_request.draft == false
name: Request Dataplane Review
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get access token
uses: actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e # v2.0.6
id: generate-token

@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Run SSH integration tests
run: |
make sshintegrationtest

@ -48,7 +48,7 @@ jobs:
cache-key: ${{ steps.hash.outputs.key }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Compute cache key from go.{mod,sum}
@ -88,7 +88,7 @@ jobs:
- shard: '4/4'
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache
@ -126,7 +126,7 @@ jobs:
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache
@ -136,21 +136,20 @@ jobs:
key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true
- name: Restore Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
id: restore-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
# Note: this is only restoring the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
key: ${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2-
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-
- name: build all
if: matrix.buildflags == '' # skip on race builder
working-directory: src
@ -206,12 +205,26 @@ jobs:
shell: bash
run: |
find $(go env GOCACHE) -type f -mmin +90 -delete
- name: Save Cache
# Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only saving the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
windows:
# windows-8vpu is a 2022 GitHub-managed runner in our
# org with 8 cores and 32 GB of RAM:
# https://github.com/organizations/tailscale/settings/actions/github-hosted-runners/1
runs-on: windows-8vcpu
permissions:
id-token: write # This is required for requesting the GitHub action identity JWT that can auth to cigocached
contents: read # This is required for actions/checkout
# ci-windows-github-1 is a 2022 GitHub-managed runner in our org with 8 cores
# and 32 GB of RAM. It is connected to a private Azure VNet that hosts cigocached.
# https://github.com/organizations/tailscale/settings/actions/github-hosted-runners/5
runs-on: ci-windows-github-1
needs: gomod-cache
name: Windows (${{ matrix.name || matrix.shard}})
strategy:
@ -220,54 +233,40 @@ jobs:
include:
- key: "win-bench"
name: "benchmarks"
- key: "win-tool-go"
name: "./tool/go"
- key: "win-shard-1-2"
shard: "1/2"
- key: "win-shard-2-2"
shard: "2/2"
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
path: ${{ github.workspace }}/src
- name: Install Go
if: matrix.key != 'win-tool-go'
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version-file: src/go.mod
go-version-file: ${{ github.workspace }}/src/go.mod
cache: false
- name: Restore Go module cache
if: matrix.key != 'win-tool-go'
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true
- name: Restore Cache
if: matrix.key != 'win-tool-go'
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
- name: Set up cigocacher
id: cigocacher-setup
uses: ./src/.github/actions/go-cache
with:
path: |
~/.cache/go-build
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ matrix.key }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ matrix.key }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ matrix.key }}-go-2-
- name: test-tool-go
if: matrix.key == 'win-tool-go'
working-directory: src
run: ./tool/go version
checkout-path: ${{ github.workspace }}/src
cache-dir: ${{ github.workspace }}/cigocacher
cigocached-url: ${{ vars.CIGOCACHED_AZURE_URL }}
cigocached-host: ${{ vars.CIGOCACHED_AZURE_HOST }}
- name: test
if: matrix.key != 'win-bench' && matrix.key != 'win-tool-go' # skip on bench builder
if: matrix.key != 'win-bench' # skip on bench builder
working-directory: src
run: go run ./cmd/testwrapper sharded:${{ matrix.shard }}
@ -279,12 +278,26 @@ jobs:
# the equals signs cause great confusion.
run: go test ./... -bench . -benchtime 1x -run "^$"
- name: Tidy cache
if: matrix.key != 'win-tool-go'
working-directory: src
shell: bash
- name: Print stats
shell: pwsh
if: steps.cigocacher-setup.outputs.success == 'true'
env:
GOCACHEPROG: ${{ env.GOCACHEPROG }}
run: |
find $(go env GOCACHE) -type f -mmin +90 -delete
Invoke-Expression "$env:GOCACHEPROG --stats" | jq .
win-tool-go:
runs-on: windows-latest
needs: gomod-cache
name: Windows (win-tool-go)
steps:
- name: checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: test-tool-go
working-directory: src
run: ./tool/go version
privileged:
needs: gomod-cache
@ -294,7 +307,7 @@ jobs:
options: --privileged
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache
@ -317,7 +330,7 @@ jobs:
if: github.repository == 'tailscale/tailscale'
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache
@ -373,31 +386,29 @@ jobs:
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-
- name: Restore Go module cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true
- name: Restore Cache
id: restore-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only restoring the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
restore-keys: |
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-
- name: build all
working-directory: src
run: ./tool/go build ./cmd/...
@ -418,6 +429,17 @@ jobs:
shell: bash
run: |
find $(go env GOCACHE) -type f -mmin +90 -delete
- name: Save Cache
# Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only saving the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
ios: # similar to cross above, but iOS can't build most of the repo. So, just
# make it build a few smoke packages.
@ -425,7 +447,7 @@ jobs:
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache
@ -463,31 +485,29 @@ jobs:
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-2-
- name: Restore Go module cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true
- name: Restore Cache
id: restore-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only restoring the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
restore-keys: |
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-
${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-
- name: build core
working-directory: src
run: ./tool/go build ./cmd/tailscale ./cmd/tailscaled
@ -501,6 +521,17 @@ jobs:
shell: bash
run: |
find $(go env GOCACHE) -type f -mmin +90 -delete
- name: Save Cache
# Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only saving the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
android:
# similar to cross above, but android fails to build a few pieces of the
@ -510,7 +541,7 @@ jobs:
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
# Super minimal Android build that doesn't even use CGO and doesn't build everything that's needed
@ -535,31 +566,29 @@ jobs:
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~\AppData\Local\go-build
# The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-go-2-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-go-2-
- name: Restore Go module cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true
- name: Restore Cache
id: restore-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only restoring the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
restore-keys: |
${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-
${{ runner.os }}-js-wasm-go-
- name: build tsconnect client
working-directory: src
run: ./tool/go build ./cmd/tsconnect/wasm ./cmd/tailscale/cli
@ -578,13 +607,24 @@ jobs:
shell: bash
run: |
find $(go env GOCACHE) -type f -mmin +90 -delete
- name: Save Cache
# Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
# Note: this is only saving the build cache. Mod cache is shared amongst
# all jobs in the workflow.
path: |
~/.cache/go-build
~\AppData\Local\go-build
key: ${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
tailscale_go: # Subset of tests that depend on our custom Go toolchain.
runs-on: ubuntu-24.04
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set GOMODCACHE env
run: echo "GOMODCACHE=$HOME/.cache/go-mod" >> $GITHUB_ENV
- name: Restore Go module cache
@ -669,7 +709,7 @@ jobs:
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Set GOMODCACHE env
@ -689,7 +729,7 @@ jobs:
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache
@ -713,7 +753,7 @@ jobs:
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache
@ -735,7 +775,7 @@ jobs:
needs: gomod-cache
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache
@ -789,7 +829,7 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Restore Go module cache

@ -21,7 +21,7 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Run update-flakes
run: ./update-flake.sh
@ -35,7 +35,7 @@ jobs:
private-key: ${{ secrets.CODE_UPDATER_APP_PRIVATE_KEY }}
- name: Send pull request
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e #v7.0.8
uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 #v8.0.0
with:
token: ${{ steps.generate-token.outputs.token }}
author: Flakes Updater <noreply+flakes-updater@tailscale.com>

@ -14,7 +14,7 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Run go get
run: |
@ -32,7 +32,7 @@ jobs:
- name: Send pull request
id: pull-request
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e #v7.0.8
uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 #v8.0.0
with:
token: ${{ steps.generate-token.outputs.token }}
author: OSS Updater <noreply+oss-updater@tailscale.com>

@ -25,7 +25,7 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src

@ -22,7 +22,7 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Install deps
run: ./tool/yarn --cwd client/web
- name: Run lint

.gitignore vendored

@ -52,3 +52,6 @@ client/web/build/assets
# Ignore personal IntelliJ settings
.idea/
# Ignore syncthing state directory.
/.stfolder

@ -0,0 +1 @@
.gitignore

@ -1,17 +0,0 @@
# This is the official list of Tailscale
# authors for copyright purposes.
#
# Names should be added to this file as one of
# Organization's name
# Individual's name <submission email address>
# Individual's name <submission email address> <email2> <emailN>
#
# Please keep the list sorted.
#
# You do not need to add entries to this list, and we don't actively
# populate this list. If you do want to be acknowledged explicitly as
# a copyright holder, though, then please send a PR referencing your
# earlier contributions and clarifying whether it's you or your
# company that owns the rights to your contribution.
Tailscale Inc.

@ -1,4 +1,4 @@
# Copyright (c) Tailscale Inc & AUTHORS
# Copyright (c) Tailscale Inc & contributors
# SPDX-License-Identifier: BSD-3-Clause
# Note that this Dockerfile is currently NOT used to build any of the published
@ -73,8 +73,13 @@ RUN GOARCH=$TARGETARCH go install -ldflags="\
FROM alpine:3.22
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables
RUN ln -s /sbin/iptables-legacy /sbin/iptables
RUN ln -s /sbin/ip6tables-legacy /sbin/ip6tables
# Alpine 3.19 replaced legacy iptables with nftables based implementation.
# Tailscale is used on some hosts that don't support nftables, such as Synology
# NAS, so link iptables back to legacy version. Hosts that don't require legacy
# iptables should be able to use Tailscale in nftables mode. See
# https://github.com/tailscale/tailscale/issues/17854
RUN rm /usr/sbin/iptables && ln -s /usr/sbin/iptables-legacy /usr/sbin/iptables
RUN rm /usr/sbin/ip6tables && ln -s /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables
COPY --from=build-env /go/bin/* /usr/local/bin/
# For compat with the previous run.sh, although ideally you should be

@ -1,12 +1,12 @@
# Copyright (c) Tailscale Inc & AUTHORS
# Copyright (c) Tailscale Inc & contributors
# SPDX-License-Identifier: BSD-3-Clause
FROM alpine:3.22
RUN apk add --no-cache ca-certificates iptables iptables-legacy iproute2 ip6tables iputils
# Alpine 3.19 replaced legacy iptables with nftables based implementation. We
# can't be certain that all hosts that run Tailscale containers currently
# suppport nftables, so link back to legacy for backwards compatibility reasons.
# TODO(irbekrm): add some way how to determine if we still run on nodes that
# don't support nftables, so that we can eventually remove these symlinks.
RUN ln -s /sbin/iptables-legacy /sbin/iptables
RUN ln -s /sbin/ip6tables-legacy /sbin/ip6tables
# Alpine 3.19 replaced legacy iptables with nftables based implementation.
# Tailscale is used on some hosts that don't support nftables, such as Synology
# NAS, so link iptables back to legacy version. Hosts that don't require legacy
# iptables should be able to use Tailscale in nftables mode. See
# https://github.com/tailscale/tailscale/issues/17854
RUN rm /usr/sbin/iptables && ln -s /usr/sbin/iptables-legacy /usr/sbin/iptables
RUN rm /usr/sbin/ip6tables && ln -s /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables

@ -1,6 +1,6 @@
BSD 3-Clause License
Copyright (c) 2020 Tailscale Inc & AUTHORS.
Copyright (c) 2020 Tailscale Inc & contributors.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

@ -1 +1 @@
1.91.0
1.95.0

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// Package appc implements App Connectors.

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package appc

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// Package appctest contains code to help test App Connectors.

@ -0,0 +1,173 @@
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package appc
import (
"cmp"
"net/netip"
"slices"
"sync"
"tailscale.com/tailcfg"
"tailscale.com/types/appctype"
"tailscale.com/util/mak"
"tailscale.com/util/set"
)
// Conn25 holds the developing state for the as-yet-nascent next-generation app connector.
// There is currently (2025-12-08) no actual app connecting functionality.
type Conn25 struct {
mu sync.Mutex
transitIPs map[tailcfg.NodeID]map[netip.Addr]netip.Addr
}
const dupeTransitIPMessage = "Duplicate transit address in ConnectorTransitIPRequest"
// HandleConnectorTransitIPRequest creates a ConnectorTransitIPResponse in response to a ConnectorTransitIPRequest.
// It updates the connector's mapping of TransitIP->DestinationIP per peer (tailcfg.NodeID).
// If a peer has stored this mapping in the connector, Conn25 will route traffic sent to TransitIPs to the corresponding DestinationIPs for that peer.
func (c *Conn25) HandleConnectorTransitIPRequest(nid tailcfg.NodeID, ctipr ConnectorTransitIPRequest) ConnectorTransitIPResponse {
resp := ConnectorTransitIPResponse{}
seen := map[netip.Addr]bool{}
for _, each := range ctipr.TransitIPs {
if seen[each.TransitIP] {
resp.TransitIPs = append(resp.TransitIPs, TransitIPResponse{
Code: OtherFailure,
Message: dupeTransitIPMessage,
})
continue
}
tipresp := c.handleTransitIPRequest(nid, each)
seen[each.TransitIP] = true
resp.TransitIPs = append(resp.TransitIPs, tipresp)
}
return resp
}
func (c *Conn25) handleTransitIPRequest(nid tailcfg.NodeID, tipr TransitIPRequest) TransitIPResponse {
c.mu.Lock()
defer c.mu.Unlock()
if c.transitIPs == nil {
c.transitIPs = make(map[tailcfg.NodeID]map[netip.Addr]netip.Addr)
}
peerMap, ok := c.transitIPs[nid]
if !ok {
peerMap = make(map[netip.Addr]netip.Addr)
c.transitIPs[nid] = peerMap
}
peerMap[tipr.TransitIP] = tipr.DestinationIP
return TransitIPResponse{}
}
func (c *Conn25) transitIPTarget(nid tailcfg.NodeID, tip netip.Addr) netip.Addr {
c.mu.Lock()
defer c.mu.Unlock()
return c.transitIPs[nid][tip]
}
// TransitIPRequest details a single TransitIP allocation request from a client to a
// connector.
type TransitIPRequest struct {
// TransitIP is the intermediate destination IP that will be received at this
// connector and will be replaced by DestinationIP when performing DNAT.
TransitIP netip.Addr `json:"transitIP,omitzero"`
// DestinationIP is the final destination IP that connections to the TransitIP
// should be mapped to when performing DNAT.
DestinationIP netip.Addr `json:"destinationIP,omitzero"`
}
// ConnectorTransitIPRequest is the request body for a PeerAPI request to
// /connector/transit-ip and can include zero or more TransitIP allocation requests.
type ConnectorTransitIPRequest struct {
// TransitIPs is the list of requested mappings.
TransitIPs []TransitIPRequest `json:"transitIPs,omitempty"`
}
// TransitIPResponseCode appears in TransitIPResponse and signifies success or failure status.
type TransitIPResponseCode int
const (
// OK indicates that the mapping was created as requested.
OK TransitIPResponseCode = 0
// OtherFailure indicates that the mapping failed for a reason that does not have
// another relevant [TransitIPResponseCode].
OtherFailure TransitIPResponseCode = 1
)
// TransitIPResponse is the response to a TransitIPRequest
type TransitIPResponse struct {
// Code is an error code indicating success or failure of the [TransitIPRequest].
Code TransitIPResponseCode `json:"code,omitzero"`
// Message is an error message explaining what happened, suitable for logging but
// not necessarily suitable for displaying in a UI to non-technical users. It
// should be empty when [Code] is [OK].
Message string `json:"message,omitzero"`
}
// ConnectorTransitIPResponse is the response to a ConnectorTransitIPRequest
type ConnectorTransitIPResponse struct {
// TransitIPs is the list of outcomes for each requested mapping. Elements
// correspond to the order of [ConnectorTransitIPRequest.TransitIPs].
TransitIPs []TransitIPResponse `json:"transitIPs,omitempty"`
}
const AppConnectorsExperimentalAttrName = "tailscale.com/app-connectors-experimental"
// PickSplitDNSPeers looks at the netmap peers capabilities and finds which peers
// want to be connectors for which domains.
func PickSplitDNSPeers(hasCap func(c tailcfg.NodeCapability) bool, self tailcfg.NodeView, peers map[tailcfg.NodeID]tailcfg.NodeView) map[string][]tailcfg.NodeView {
var m map[string][]tailcfg.NodeView
if !hasCap(AppConnectorsExperimentalAttrName) {
return m
}
apps, err := tailcfg.UnmarshalNodeCapViewJSON[appctype.AppConnectorAttr](self.CapMap(), AppConnectorsExperimentalAttrName)
if err != nil {
return m
}
tagToDomain := make(map[string][]string)
for _, app := range apps {
for _, tag := range app.Connectors {
tagToDomain[tag] = append(tagToDomain[tag], app.Domains...)
}
}
// NodeIDs are Comparable, and we have a map of NodeID to NodeView anyway, so
// use a Set of NodeIDs to deduplicate, and populate into a []NodeView later.
var work map[string]set.Set[tailcfg.NodeID]
for _, peer := range peers {
if !peer.Valid() || !peer.Hostinfo().Valid() {
continue
}
if isConn, _ := peer.Hostinfo().AppConnector().Get(); !isConn {
continue
}
for _, t := range peer.Tags().All() {
domains := tagToDomain[t]
for _, domain := range domains {
if work[domain] == nil {
mak.Set(&work, domain, set.Set[tailcfg.NodeID]{})
}
work[domain].Add(peer.ID())
}
}
}
// Populate m. Make a []tailcfg.NodeView from []tailcfg.NodeID using the peers map.
// And sort it to our preference.
for domain, ids := range work {
nodes := make([]tailcfg.NodeView, 0, ids.Len())
for id := range ids {
nodes = append(nodes, peers[id])
}
// The ordering of the nodes in the map vals is semantic (dnsConfigForNetmap uses the first node it can
// get a peer api url for as its split dns target). We can think of it as a preference order, except that
// we don't (currently 2026-01-14) have any preference over which node is chosen.
slices.SortFunc(nodes, func(a, b tailcfg.NodeView) int {
return cmp.Compare(a.ID(), b.ID())
})
mak.Set(&m, domain, nodes)
}
return m
}

@ -0,0 +1,311 @@
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package appc
import (
"encoding/json"
"net/netip"
"reflect"
"testing"
"tailscale.com/tailcfg"
"tailscale.com/types/appctype"
"tailscale.com/types/opt"
)
// TestHandleConnectorTransitIPRequestZeroLength tests that if sent a
// ConnectorTransitIPRequest with 0 TransitIPRequests, we respond with a
// ConnectorTransitIPResponse with 0 TransitIPResponses.
func TestHandleConnectorTransitIPRequestZeroLength(t *testing.T) {
c := &Conn25{}
req := ConnectorTransitIPRequest{}
nid := tailcfg.NodeID(1)
resp := c.HandleConnectorTransitIPRequest(nid, req)
if len(resp.TransitIPs) != 0 {
t.Fatalf("n TransitIPs in response: %d, want 0", len(resp.TransitIPs))
}
}
// TestHandleConnectorTransitIPRequestStoresAddr tests that if sent a
// request with a transit addr and a destination addr we store that mapping
// and can retrieve it. If sent another req with a different dst for that transit addr
// we store that instead.
func TestHandleConnectorTransitIPRequestStoresAddr(t *testing.T) {
c := &Conn25{}
nid := tailcfg.NodeID(1)
tip := netip.MustParseAddr("0.0.0.1")
dip := netip.MustParseAddr("1.2.3.4")
dip2 := netip.MustParseAddr("1.2.3.5")
mr := func(t, d netip.Addr) ConnectorTransitIPRequest {
return ConnectorTransitIPRequest{
TransitIPs: []TransitIPRequest{
{TransitIP: t, DestinationIP: d},
},
}
}
resp := c.HandleConnectorTransitIPRequest(nid, mr(tip, dip))
if len(resp.TransitIPs) != 1 {
t.Fatalf("n TransitIPs in response: %d, want 1", len(resp.TransitIPs))
}
got := resp.TransitIPs[0].Code
if got != TransitIPResponseCode(0) {
t.Fatalf("TransitIP Code: %d, want 0", got)
}
gotAddr := c.transitIPTarget(nid, tip)
if gotAddr != dip {
t.Fatalf("Connector stored destination for tip: %v, want %v", gotAddr, dip)
}
// mapping can be overwritten
resp2 := c.HandleConnectorTransitIPRequest(nid, mr(tip, dip2))
if len(resp2.TransitIPs) != 1 {
t.Fatalf("n TransitIPs in response: %d, want 1", len(resp2.TransitIPs))
}
got2 := resp2.TransitIPs[0].Code
if got2 != TransitIPResponseCode(0) {
t.Fatalf("TransitIP Code: %d, want 0", got2)
}
gotAddr2 := c.transitIPTarget(nid, tip)
if gotAddr2 != dip2 {
t.Fatalf("Connector stored destination for tip: %v, want %v", gotAddr, dip2)
}
}
// TestHandleConnectorTransitIPRequestMultipleTIP tests that a request with
// multiple mappings stores them all, including multiple transit addrs that
// map to the same destination.
func TestHandleConnectorTransitIPRequestMultipleTIP(t *testing.T) {
c := &Conn25{}
nid := tailcfg.NodeID(1)
tip := netip.MustParseAddr("0.0.0.1")
tip2 := netip.MustParseAddr("0.0.0.2")
tip3 := netip.MustParseAddr("0.0.0.3")
dip := netip.MustParseAddr("1.2.3.4")
dip2 := netip.MustParseAddr("1.2.3.5")
req := ConnectorTransitIPRequest{
TransitIPs: []TransitIPRequest{
{TransitIP: tip, DestinationIP: dip},
{TransitIP: tip2, DestinationIP: dip2},
// can store same dst addr for multiple transit addrs
{TransitIP: tip3, DestinationIP: dip},
},
}
resp := c.HandleConnectorTransitIPRequest(nid, req)
if len(resp.TransitIPs) != 3 {
t.Fatalf("n TransitIPs in response: %d, want 3", len(resp.TransitIPs))
}
for i := 0; i < 3; i++ {
got := resp.TransitIPs[i].Code
if got != TransitIPResponseCode(0) {
t.Fatalf("i=%d TransitIP Code: %d, want 0", i, got)
}
}
gotAddr1 := c.transitIPTarget(nid, tip)
if gotAddr1 != dip {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip, gotAddr1, dip)
}
gotAddr2 := c.transitIPTarget(nid, tip2)
if gotAddr2 != dip2 {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip2, gotAddr2, dip2)
}
gotAddr3 := c.transitIPTarget(nid, tip3)
if gotAddr3 != dip {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip3, gotAddr3, dip)
}
}
// TestHandleConnectorTransitIPRequestSameTIP tests that if a request has more
// than one TransitIPRequest for the same transit addr, only the first is stored
// and the subsequent ones get an error code and message in the response.
func TestHandleConnectorTransitIPRequestSameTIP(t *testing.T) {
c := &Conn25{}
nid := tailcfg.NodeID(1)
tip := netip.MustParseAddr("0.0.0.1")
tip2 := netip.MustParseAddr("0.0.0.2")
dip := netip.MustParseAddr("1.2.3.4")
dip2 := netip.MustParseAddr("1.2.3.5")
dip3 := netip.MustParseAddr("1.2.3.6")
req := ConnectorTransitIPRequest{
TransitIPs: []TransitIPRequest{
{TransitIP: tip, DestinationIP: dip},
// cannot have dupe TransitIPs in one ConnectorTransitIPRequest
{TransitIP: tip, DestinationIP: dip2},
{TransitIP: tip2, DestinationIP: dip3},
},
}
resp := c.HandleConnectorTransitIPRequest(nid, req)
if len(resp.TransitIPs) != 3 {
t.Fatalf("n TransitIPs in response: %d, want 3", len(resp.TransitIPs))
}
got := resp.TransitIPs[0].Code
if got != TransitIPResponseCode(0) {
t.Fatalf("i=0 TransitIP Code: %d, want 0", got)
}
msg := resp.TransitIPs[0].Message
if msg != "" {
t.Fatalf("i=0 TransitIP Message: \"%s\", want \"%s\"", msg, "")
}
got1 := resp.TransitIPs[1].Code
if got1 != TransitIPResponseCode(1) {
t.Fatalf("i=1 TransitIP Code: %d, want 1", got1)
}
msg1 := resp.TransitIPs[1].Message
if msg1 != dupeTransitIPMessage {
t.Fatalf("i=1 TransitIP Message: \"%s\", want \"%s\"", msg1, dupeTransitIPMessage)
}
got2 := resp.TransitIPs[2].Code
if got2 != TransitIPResponseCode(0) {
t.Fatalf("i=2 TransitIP Code: %d, want 0", got2)
}
msg2 := resp.TransitIPs[2].Message
if msg2 != "" {
t.Fatalf("i=2 TransitIP Message: \"%s\", want \"%s\"", msg, "")
}
gotAddr1 := c.transitIPTarget(nid, tip)
if gotAddr1 != dip {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip, gotAddr1, dip)
}
gotAddr2 := c.transitIPTarget(nid, tip2)
if gotAddr2 != dip3 {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip2, gotAddr2, dip3)
}
}
// TestTransitIPTargetUnknownTIP tests that looking up an unknown transit addr returns the zero netip.Addr rather than failing.
func TestTransitIPTargetUnknownTIP(t *testing.T) {
c := &Conn25{}
nid := tailcfg.NodeID(1)
tip := netip.MustParseAddr("0.0.0.1")
got := c.transitIPTarget(nid, tip)
want := netip.Addr{}
if got != want {
t.Fatalf("Unknown transit addr, want: %v, got %v", want, got)
}
}
func TestPickSplitDNSPeers(t *testing.T) {
getBytesForAttr := func(name string, domains []string, tags []string) []byte {
attr := appctype.AppConnectorAttr{
Name: name,
Domains: domains,
Connectors: tags,
}
bs, err := json.Marshal(attr)
if err != nil {
t.Fatalf("test setup: %v", err)
}
return bs
}
appOneBytes := getBytesForAttr("app1", []string{"example.com"}, []string{"tag:one"})
appTwoBytes := getBytesForAttr("app2", []string{"a.example.com"}, []string{"tag:two"})
appThreeBytes := getBytesForAttr("app3", []string{"woo.b.example.com", "hoo.b.example.com"}, []string{"tag:three1", "tag:three2"})
appFourBytes := getBytesForAttr("app4", []string{"woo.b.example.com", "c.example.com"}, []string{"tag:four1", "tag:four2"})
makeNodeView := func(id tailcfg.NodeID, name string, tags []string) tailcfg.NodeView {
return (&tailcfg.Node{
ID: id,
Name: name,
Tags: tags,
Hostinfo: (&tailcfg.Hostinfo{AppConnector: opt.NewBool(true)}).View(),
}).View()
}
nvp1 := makeNodeView(1, "p1", []string{"tag:one"})
nvp2 := makeNodeView(2, "p2", []string{"tag:four1", "tag:four2"})
nvp3 := makeNodeView(3, "p3", []string{"tag:two", "tag:three1"})
nvp4 := makeNodeView(4, "p4", []string{"tag:two", "tag:three2", "tag:four2"})
for _, tt := range []struct {
name string
want map[string][]tailcfg.NodeView
peers []tailcfg.NodeView
config []tailcfg.RawMessage
}{
{
name: "empty",
},
{
name: "bad-config", // bad config should return a nil map rather than error.
config: []tailcfg.RawMessage{tailcfg.RawMessage(`hey`)},
},
{
name: "no-peers",
config: []tailcfg.RawMessage{tailcfg.RawMessage(appOneBytes)},
},
{
name: "peers-that-are-not-connectors",
config: []tailcfg.RawMessage{tailcfg.RawMessage(appOneBytes)},
peers: []tailcfg.NodeView{
(&tailcfg.Node{
ID: 5,
Name: "p5",
Tags: []string{"tag:one"},
}).View(),
(&tailcfg.Node{
ID: 6,
Name: "p6",
Tags: []string{"tag:one"},
}).View(),
},
},
{
name: "peers-that-dont-match-tags",
config: []tailcfg.RawMessage{tailcfg.RawMessage(appOneBytes)},
peers: []tailcfg.NodeView{
makeNodeView(5, "p5", []string{"tag:seven"}),
makeNodeView(6, "p6", nil),
},
},
{
name: "matching-tagged-connector-peers",
config: []tailcfg.RawMessage{
tailcfg.RawMessage(appOneBytes),
tailcfg.RawMessage(appTwoBytes),
tailcfg.RawMessage(appThreeBytes),
tailcfg.RawMessage(appFourBytes),
},
peers: []tailcfg.NodeView{
nvp1,
nvp2,
nvp3,
nvp4,
makeNodeView(5, "p5", nil),
},
want: map[string][]tailcfg.NodeView{
// p5 has no matching tags and so doesn't appear
"example.com": {nvp1},
"a.example.com": {nvp3, nvp4},
"woo.b.example.com": {nvp2, nvp3, nvp4},
"hoo.b.example.com": {nvp3, nvp4},
"c.example.com": {nvp2, nvp4},
},
},
} {
t.Run(tt.name, func(t *testing.T) {
selfNode := &tailcfg.Node{}
if tt.config != nil {
selfNode.CapMap = tailcfg.NodeCapMap{
tailcfg.NodeCapability(AppConnectorsExperimentalAttrName): tt.config,
}
}
selfView := selfNode.View()
peers := map[tailcfg.NodeID]tailcfg.NodeView{}
for _, p := range tt.peers {
peers[p.ID()] = p
}
got := PickSplitDNSPeers(func(_ tailcfg.NodeCapability) bool {
return true
}, selfView, peers)
if !reflect.DeepEqual(got, tt.want) {
t.Fatalf("got %v, want %v", got, tt.want)
}
})
}
}

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package appc

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package appc

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_appconnectors

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build ts_omit_appconnectors

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build tailscale_go
@ -17,6 +17,9 @@ func init() {
panic("binary built with tailscale_go build tag but failed to read build info or find tailscale.toolchain.rev in build info")
}
want := strings.TrimSpace(GoToolchainRev)
if os.Getenv("TS_GO_NEXT") == "1" {
want = strings.TrimSpace(GoToolchainNextRev)
}
if tsRev != want {
if os.Getenv("TS_PERMIT_TOOLCHAIN_MISMATCH") == "1" {
fmt.Fprintf(os.Stderr, "tailscale.toolchain.rev = %q, want %q; but ignoring due to TS_PERMIT_TOOLCHAIN_MISMATCH=1\n", tsRev, want)

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// Package atomicfile contains code related to writing to filesystems

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !windows

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !js && !windows

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package atomicfile

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package atomicfile

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package atomicfile

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// Package chirp implements a client to communicate with the BIRD Internet

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package chirp

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !js && !ts_omit_acme

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_debugportmapper

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// Package local contains a Go client for the Tailscale LocalAPI.
@ -43,6 +43,7 @@ import (
"tailscale.com/types/appctype"
"tailscale.com/types/dnstype"
"tailscale.com/types/key"
"tailscale.com/util/clientmetric"
"tailscale.com/util/eventbus"
)
@ -385,18 +386,14 @@ func (lc *Client) IncrementCounter(ctx context.Context, name string, delta int)
if !buildfeatures.HasClientMetrics {
return nil
}
type metricUpdate struct {
Name string `json:"name"`
Type string `json:"type"`
Value int `json:"value"` // amount to increment by
}
if delta < 0 {
return errors.New("negative delta not allowed")
}
_, err := lc.send(ctx, "POST", "/localapi/v0/upload-client-metrics", 200, jsonBody([]metricUpdate{{
_, err := lc.send(ctx, "POST", "/localapi/v0/upload-client-metrics", 200, jsonBody([]clientmetric.MetricUpdate{{
Name: name,
Type: "counter",
Value: delta,
Op: "add",
}}))
return err
}
@ -405,15 +402,23 @@ func (lc *Client) IncrementCounter(ctx context.Context, name string, delta int)
// metric by the given delta. If the metric has yet to exist, a new gauge
// metric is created and initialized to delta. The delta value can be negative.
func (lc *Client) IncrementGauge(ctx context.Context, name string, delta int) error {
type metricUpdate struct {
Name string `json:"name"`
Type string `json:"type"`
Value int `json:"value"` // amount to increment by
}
_, err := lc.send(ctx, "POST", "/localapi/v0/upload-client-metrics", 200, jsonBody([]metricUpdate{{
_, err := lc.send(ctx, "POST", "/localapi/v0/upload-client-metrics", 200, jsonBody([]clientmetric.MetricUpdate{{
Name: name,
Type: "gauge",
Value: delta,
Op: "add",
}}))
return err
}
// SetGauge sets the value of a Tailscale daemon's gauge metric to the given value.
// If the metric has yet to exist, a new gauge metric is created and initialized to value.
func (lc *Client) SetGauge(ctx context.Context, name string, value int) error {
_, err := lc.send(ctx, "POST", "/localapi/v0/upload-client-metrics", 200, jsonBody([]clientmetric.MetricUpdate{{
Name: name,
Type: "gauge",
Value: value,
Op: "set",
}}))
return err
}
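For reference, a rough sketch of the batch these helpers serialize and POST to /localapi/v0/upload-client-metrics, assuming clientmetric.MetricUpdate exports Name, Type, Value, and Op as the struct literals above suggest; the metric names and the function itself are made up for illustration, and the JSON field names depend on that type's struct tags.
// metricsPayloadSketch builds the same kind of batch that SetGauge and
// IncrementCounter send, purely to illustrate the wire shape; it is not a
// helper in the local package.
func metricsPayloadSketch() ([]byte, error) {
	updates := []clientmetric.MetricUpdate{
		{Name: "example_gauge", Type: "gauge", Value: 1, Op: "set"},     // like SetGauge(ctx, "example_gauge", 1)
		{Name: "example_counter", Type: "counter", Value: 1, Op: "add"}, // like IncrementCounter(ctx, "example_counter", 1)
	}
	return json.Marshal(updates)
}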

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_serve

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_syspolicy

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_tailnetlock

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin
@ -66,8 +66,8 @@ func (menu *Menu) Run(client *local.Client) {
case <-menu.bgCtx.Done():
}
}()
go menu.lc.IncrementGauge(menu.bgCtx, "systray_running", 1)
defer menu.lc.IncrementGauge(menu.bgCtx, "systray_running", -1)
go menu.lc.SetGauge(menu.bgCtx, "systray_running", 1)
defer menu.lc.SetGauge(menu.bgCtx, "systray_running", 0)
systray.Run(menu.onReady, menu.onExit)
}
@ -372,6 +372,7 @@ func setRemoteIcon(menu *systray.MenuItem, urlStr string) {
}
cacheMu.Lock()
defer cacheMu.Unlock()
b, ok := httpCache[urlStr]
if !ok {
resp, err := http.Get(urlStr)
@ -395,7 +396,6 @@ func setRemoteIcon(menu *systray.MenuItem, urlStr string) {
resp.Body.Close()
}
}
cacheMu.Unlock()
if len(b) > 0 {
menu.SetIcon(b)

@ -1,6 +1,6 @@
[Unit]
Description=Tailscale System Tray
After=systemd.service
After=graphical.target
[Service]
Type=simple

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// Package apitype contains types for the Tailscale LocalAPI and control plane API.

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package apitype

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !js && !ts_omit_acme

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// The servetls program shows how to run an HTTPS server

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package tailscale

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package tailscale

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build !go1.23

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package tailscale

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package web

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
package web

@ -34,10 +34,10 @@
"prettier-plugin-organize-imports": "^3.2.2",
"tailwindcss": "^3.3.3",
"typescript": "^5.3.3",
"vite": "^5.1.7",
"vite": "^5.4.21",
"vite-plugin-svgr": "^4.2.0",
"vite-tsconfig-paths": "^3.5.0",
"vitest": "^1.3.1"
"vitest": "^1.6.1"
},
"resolutions": {
"@typescript-eslint/eslint-plugin": "^6.2.1",

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// qnap.go contains handlers and logic, such as authentication,

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import { useCallback } from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import * as Primitive from "@radix-ui/react-popover"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import React from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import { useCallback, useEffect, useState } from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import { useMemo } from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import { useCallback, useEffect, useState } from "react"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import { useRawToasterForHook } from "src/ui/toaster"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import { useCallback, useEffect, useState } from "react"

@ -1,10 +1,10 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
// Preserved js license comment for web client app.
/**
* @license
* Copyright (c) Tailscale Inc & AUTHORS
* Copyright (c) Tailscale Inc & contributors
* SPDX-License-Identifier: BSD-3-Clause
*/

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import { assertNever } from "src/utils/util"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"

@ -1,4 +1,4 @@
// Copyright (c) Tailscale Inc & AUTHORS
// Copyright (c) Tailscale Inc & contributors
// SPDX-License-Identifier: BSD-3-Clause
import cx from "classnames"
