Compare commits

..

8 Commits

Author SHA1 Message Date
Nick Khyl 1b41fdeddb VERSION.txt: this is v1.78.3
Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 year ago
Nick Khyl 3037dc793c VERSION.txt: this is v1.78.2
Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 year ago
Irbe Krumina 6e0f168db0
cmd/containerboot: fix nil pointer exception (cherry-pick of #14357, #14358) (#14359)
* cmd/containerboot: guard kubeClient against nil dereference (#14357)

A method on kc was called unconditionally, even if it was not initialized,
leading to a nil pointer dereference when TS_SERVE_CONFIG was set
outside Kubernetes.

Add a guard symmetric with other uses of the kubeClient.

Signed-off-by: Bjorn Neergaard <bjorn@neersighted.com>
(cherry picked from commit 8b1d01161b)

* cmd/containerboot: don't attempt to write kube Secret in non-kube environments (#14358)

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
(cherry picked from commit 0cc071f154)

* cmd/containerboot: don't attempt to patch a Secret field without permissions (#14365)

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
(cherry picked from commit 6e552f66a0)

Updates tailscale/tailscale#14354
1 year ago
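The guard described in the first cherry-pick above can be sketched as follows. This is a minimal, hypothetical Go sketch: the type and function names are invented stand-ins, not the actual cmd/containerboot code.

```go
// Hypothetical sketch of guarding a possibly-nil Kubernetes client.
package main

import "fmt"

// kubeClient stands in for containerboot's Kubernetes client; it is left
// nil when the process runs outside Kubernetes.
type kubeClient struct{}

func (kc *kubeClient) storeServeConfig(cfg string) {
	fmt.Printf("stored %q in kube Secret\n", cfg)
}

// applyServeConfig only touches kc when it was actually initialized, so a
// TS_SERVE_CONFIG set outside Kubernetes no longer dereferences nil.
func applyServeConfig(kc *kubeClient, cfg string) {
	if kc != nil {
		kc.storeServeConfig(cfg)
		return
	}
	fmt.Printf("applied %q without a kube Secret\n", cfg)
}

func main() {
	applyServeConfig(nil, "serve.json")           // previously: nil pointer dereference
	applyServeConfig(&kubeClient{}, "serve.json") // in-cluster path unchanged
}
```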
Tom Proctor 3e3d5d8c68
hostinfo: fix testing in container (#14330) (#14337)
Previously this unit test failed if it was run in a container. Update the assertion
to check exactly the condition we are trying to assert: the package type
should only be 'container' if we use the build tag.

Updates #14317

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
(cherry picked from commit 06c5e83c20)
1 year ago
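The tightened assertion can be illustrated with a minimal, hypothetical Go sketch. The real test lives in the hostinfo package and keys off a build tag set at compile time; the variable and return values here are invented stand-ins.

```go
// Hypothetical sketch: package type depends on the build tag, never on
// whether the process happens to be running inside a container.
package main

import "fmt"

// builtForContainer stands in for the container build tag: true only when
// the binary was built as the container package.
var builtForContainer = false

func packageType() string {
	if builtForContainer {
		return "container"
	}
	return "" // stand-in for whatever non-container packaging reports
}

func main() {
	// Even when this runs inside a container, the type must not be
	// "container" unless the build tag was set.
	fmt.Printf("without tag: %q\n", packageType())
	builtForContainer = true
	fmt.Printf("with tag: %q\n", packageType())
}
```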
Brad Fitzpatrick c80eb698d5 VERSION.txt: this is v1.78.1
Change-Id: I3588027fee8460b27c357d3a656f769fda151ccc
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 year ago
Brad Fitzpatrick 1aef3e83b8 health: fix TestHealthMetric to pass on release branch
Fixes #14302

Change-Id: I9fd893a97711c72b713fe5535f2ccb93fadf7452
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
(cherry picked from commit dc6728729e)
1 year ago
Brad Fitzpatrick 2690b4762f Revert "VERSION.txt: this is v1.78.0"
This reverts commit 0267fe83b2.

Reason: it converted the tree to Windows line endings.

Updates #14299

Change-Id: I2271a61d43e99bd0bbcf9f4831e8783e570ba08a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1 year ago
Nick Khyl 0267fe83b2 VERSION.txt: this is v1.78.0
Signed-off-by: Nick Khyl <nickk@tailscale.com>
1 year ago

@ -1,17 +0,0 @@
PRs welcome! But please file bugs first and explain the problem or
motivation. For new or changed functionality, strike up a discussion
and get agreement on the design/solution before spending too much time writing
code.

Commit messages should [reference
bugs](https://docs.github.com/en/github/writing-on-github/autolinked-references-and-urls).

We require [Developer Certificate of
Origin](https://en.wikipedia.org/wiki/Developer_Certificate_of_Origin) (DCO)
`Signed-off-by` lines in commits. (`git commit -s`)

Please squash your code review edits & force push. Multiple commits in
a PR are fine, but only if they're each logically separate and all tests pass
at each stage. No fixup commits.

See [commit-messages.md](docs/commit-messages.md) (or skim `git log`) for our commit message style.

@ -1,54 +0,0 @@
#!/usr/bin/env bash
#
# This script sets up cigocacher, but should never fail the build if unsuccessful.
# It expects to run on a GitHub-hosted runner, and connects to cigocached over a
# private Azure network that is configured at the runner group level in GitHub.
#
# Usage: ./action.sh
# Inputs:
#   URL: The cigocached server URL.
#   HOST: The cigocached server host to dial.
# Outputs:
#   success: Whether cigocacher was set up successfully.

set -euo pipefail

if [ -z "${GITHUB_ACTIONS:-}" ]; then
  echo "This script is intended to run within GitHub Actions"
  exit 1
fi

if [ -z "${URL:-}" ]; then
  echo "No cigocached URL is set, skipping cigocacher setup"
  exit 0
fi

GOPATH=$(command -v go || true)
if [ -z "${GOPATH}" ]; then
  if [ ! -f "tool/go" ]; then
    echo "Go not available, unable to proceed"
    exit 1
  fi
  GOPATH="./tool/go"
fi

BIN_PATH="${RUNNER_TEMP:-/tmp}/cigocacher$(${GOPATH} env GOEXE)"
if [ -d "cmd/cigocacher" ]; then
  echo "cmd/cigocacher found locally, building from local source"
  "${GOPATH}" build -o "${BIN_PATH}" ./cmd/cigocacher
else
  echo "cmd/cigocacher not found locally, fetching from tailscale.com/cmd/cigocacher"
  "${GOPATH}" build -o "${BIN_PATH}" tailscale.com/cmd/cigocacher
fi

CIGOCACHER_TOKEN="$("${BIN_PATH}" --auth --cigocached-url "${URL}" --cigocached-host "${HOST}")"
if [ -z "${CIGOCACHER_TOKEN:-}" ]; then
  echo "Failed to fetch cigocacher token, skipping cigocacher setup"
  exit 0
fi
echo "Fetched cigocacher token successfully"

echo "::add-mask::${CIGOCACHER_TOKEN}"
echo "GOCACHEPROG=${BIN_PATH} --cache-dir ${CACHE_DIR} --cigocached-url ${URL} --cigocached-host ${HOST} --token ${CIGOCACHER_TOKEN}" >> "${GITHUB_ENV}"
echo "success=true" >> "${GITHUB_OUTPUT}"

@ -1,35 +0,0 @@
name: go-cache
description: Set up build to use cigocacher
inputs:
  cigocached-url:
    description: URL of the cigocached server
    required: true
  cigocached-host:
    description: Host to dial for the cigocached server
    required: true
  checkout-path:
    description: Path to cloned repository
    required: true
  cache-dir:
    description: Directory to use for caching
    required: true
outputs:
  success:
    description: Whether cigocacher was set up successfully
    value: ${{ steps.setup.outputs.success }}
runs:
  using: composite
  steps:
    - name: Setup cigocacher
      id: setup
      shell: bash
      env:
        URL: ${{ inputs.cigocached-url }}
        HOST: ${{ inputs.cigocached-host }}
        CACHE_DIR: ${{ inputs.cache-dir }}
      working-directory: ${{ inputs.checkout-path }}
      # https://github.com/orgs/community/discussions/25910
      run: $GITHUB_ACTION_PATH/action.sh

@ -10,7 +10,7 @@ on:
       - '.github/workflows/checklocks.yml'
 concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 jobs:
@ -18,7 +18,7 @@ jobs:
     runs-on: [ ubuntu-latest ]
     steps:
       - name: Check out code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       - name: Build checklocks
         run: ./tool/go build -o /tmp/checklocks gvisor.dev/gvisor/tools/checklocks/cmd/checklocks

@ -1,73 +0,0 @@
name: Build cigocacher
on:
  # Released on-demand. The commit will be used as part of the tag, so generally
  # prefer to release from main where the commit is stable in linear history.
  workflow_dispatch:
jobs:
  build:
    strategy:
      matrix:
        GOOS: ["linux", "darwin", "windows"]
        GOARCH: ["amd64", "arm64"]
    runs-on: ubuntu-24.04
    env:
      GOOS: "${{ matrix.GOOS }}"
      GOARCH: "${{ matrix.GOARCH }}"
      CGO_ENABLED: "0"
    steps:
      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Build
        run: |
          OUT="cigocacher$(./tool/go env GOEXE)"
          ./tool/go build -o "${OUT}" ./cmd/cigocacher/
          tar -zcf cigocacher-${{ matrix.GOOS }}-${{ matrix.GOARCH }}.tar.gz "${OUT}"
      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
        with:
          name: cigocacher-${{ matrix.GOOS }}-${{ matrix.GOARCH }}
          path: cigocacher-${{ matrix.GOOS }}-${{ matrix.GOARCH }}.tar.gz
  release:
    runs-on: ubuntu-24.04
    needs: build
    permissions:
      contents: write
    steps:
      - name: Download all artifacts
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
        with:
          pattern: 'cigocacher-*'
          merge-multiple: true
      # This step is a simplified version of actions/create-release and
      # actions/upload-release-asset, which are archived and unmaintained.
      - name: Create release
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
        with:
          script: |
            const fs = require('fs');
            const path = require('path');
            const { data: release } = await github.rest.repos.createRelease({
              owner: context.repo.owner,
              repo: context.repo.repo,
              tag_name: `cmd/cigocacher/${{ github.sha }}`,
              name: `cigocacher-${{ github.sha }}`,
              draft: false,
              prerelease: true,
              target_commitish: `${{ github.sha }}`
            });
            const files = fs.readdirSync('.').filter(f => f.endsWith('.tar.gz'));
            for (const file of files) {
              await github.rest.repos.uploadReleaseAsset({
                owner: context.repo.owner,
                repo: context.repo.repo,
                release_id: release.id,
                name: file,
                data: fs.readFileSync(file)
              });
              console.log(`Uploaded ${file}`);
            }

@ -23,7 +23,7 @@ on:
     - cron: '31 14 * * 5'
 concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 jobs:
@ -45,17 +45,17 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       # Install a more recent Go that understands modern go.mod content.
       - name: Install Go
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
         with:
           go-version-file: go.mod
       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@76621b61decf072c1cee8dd1ce2d2a82d33c17ed # v3.29.5
+        uses: github/codeql-action/init@4f3212b61783c3c68e8309a0f18a699764811cda # v3.27.1
         with:
           languages: ${{ matrix.language }}
           # If you wish to specify custom queries, you can do so here or in a config file.
@ -66,7 +66,7 @@ jobs:
       # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
       # If this step fails, then you should remove it and run the build manually (see below)
       - name: Autobuild
-        uses: github/codeql-action/autobuild@76621b61decf072c1cee8dd1ce2d2a82d33c17ed # v3.29.5
+        uses: github/codeql-action/autobuild@4f3212b61783c3c68e8309a0f18a699764811cda # v3.27.1
       # Command-line programs to run using the OS shell.
       # 📚 https://git.io/JvXDl
@ -80,4 +80,4 @@ jobs:
       #   make release
       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@76621b61decf072c1cee8dd1ce2d2a82d33c17ed # v3.29.5
+        uses: github/codeql-action/analyze@4f3212b61783c3c68e8309a0f18a699764811cda # v3.27.1

@ -1,29 +0,0 @@
name: "Validate Docker base image"
on:
workflow_dispatch:
pull_request:
paths:
- "Dockerfile.base"
- ".github/workflows/docker-base.yml"
jobs:
build-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: "build and test"
run: |
set -e
IMG="test-base:$(head -c 8 /dev/urandom | xxd -p)"
docker build -t "$IMG" -f Dockerfile.base .
iptables_version=$(docker run --rm "$IMG" iptables --version)
if [[ "$iptables_version" != *"(legacy)"* ]]; then
echo "ERROR: Docker base image should contain legacy iptables; found ${iptables_version}"
exit 1
fi
ip6tables_version=$(docker run --rm "$IMG" ip6tables --version)
if [[ "$ip6tables_version" != *"(legacy)"* ]]; then
echo "ERROR: Docker base image should contain legacy ip6tables; found ${ip6tables_version}"
exit 1
fi

@ -4,10 +4,12 @@ on:
     branches:
       - main
   pull_request:
+    branches:
+      - "*"
 jobs:
   deploy:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       - name: "Build Docker image"
         run: docker build .

@ -17,11 +17,11 @@ jobs:
       id-token: "write"
       contents: "read"
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
         with:
           ref: "${{ (inputs.tag != null) && format('refs/tags/{0}', inputs.tag) || '' }}"
-      - uses: DeterminateSystems/nix-installer-action@786fff0690178f1234e4e1fe9b536e94f5433196 # v20
+      - uses: "DeterminateSystems/nix-installer-action@main"
-      - uses: DeterminateSystems/flakehub-push@71f57208810a5d299fc6545350981de98fdbc860 # v6
+      - uses: "DeterminateSystems/flakehub-push@main"
         with:
           visibility: "public"
           tag: "${{ inputs.tag }}"

@ -2,11 +2,7 @@ name: golangci-lint
 on:
   # For now, only lint pull requests, not the main branches.
   pull_request:
-    paths:
-      - ".github/workflows/golangci-lint.yml"
-      - "**.go"
-      - "go.mod"
-      - "go.sum"
   # TODO(andrew): enable for main branch after an initial waiting period.
   #push:
   #  branches:
@ -19,7 +15,7 @@ permissions:
   pull-requests: read
 concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 jobs:
@ -27,21 +23,18 @@ jobs:
     name: lint
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
-      - uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
+      - uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
         with:
           go-version-file: go.mod
-          cache: true
+          cache: false
       - name: golangci-lint
-        uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
+        # Note: this is the 'v6.1.0' tag as of 2024-08-21
+        uses: golangci/golangci-lint-action@aaa42aa0628b4ae2578232a66b541047968fac86
         with:
-          version: v2.4.0
+          version: v1.60
           # Show only new issues if it's a pull request.
           only-new-issues: true
-          # Loading packages with a cold cache takes a while:
-          args: --timeout=10m

@ -14,7 +14,7 @@ jobs:
     steps:
       - name: Check out code into the Go module directory
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       - name: Install govulncheck
         run: ./tool/go install golang.org/x/vuln/cmd/govulncheck@latest
@ -24,13 +24,13 @@ jobs:
       - name: Post to slack
         if: failure() && github.event_name == 'schedule'
-        uses: slackapi/slack-github-action@91efab103c0de0a537f72a35f6b8cda0ee76bf0a # v2.1.1
+        uses: slackapi/slack-github-action@37ebaef184d7626c5f204ab8d3baff4262dd30f0 # v1.27.0
+        env:
+          SLACK_BOT_TOKEN: ${{ secrets.GOVULNCHECK_BOT_TOKEN }}
         with:
-          method: chat.postMessage
-          token: ${{ secrets.GOVULNCHECK_BOT_TOKEN }}
+          channel-id: 'C05PXRM304B'
           payload: |
             {
-              "channel": "C08FGKZCQTW",
               "blocks": [
                 {
                   "type": "section",

@ -1,18 +1,16 @@
 name: test installer.sh
 on:
-  schedule:
-    - cron: '0 15 * * *' # 10am EST (UTC-4/5)
   push:
     branches:
       - "main"
     paths:
       - scripts/installer.sh
-      - .github/workflows/installer.yml
   pull_request:
+    branches:
+      - "*"
     paths:
       - scripts/installer.sh
-      - .github/workflows/installer.yml
 jobs:
   test:
@ -31,11 +29,13 @@ jobs:
           - "debian:stable-slim"
           - "debian:testing-slim"
           - "debian:sid-slim"
+          - "ubuntu:18.04"
           - "ubuntu:20.04"
           - "ubuntu:22.04"
-          - "ubuntu:24.04"
+          - "ubuntu:23.04"
           - "elementary/docker:stable"
           - "elementary/docker:unstable"
+          - "parrotsec/core:lts-amd64"
           - "parrotsec/core:latest"
           - "kalilinux/kali-rolling"
           - "kalilinux/kali-dev"
@ -48,7 +48,7 @@ jobs:
           - "opensuse/leap:latest"
           - "opensuse/tumbleweed:latest"
           - "archlinux:latest"
-          - "alpine:3.21"
+          - "alpine:3.14"
           - "alpine:latest"
           - "alpine:edge"
         deps:
@ -58,14 +58,10 @@ jobs:
           # Check a few images with wget rather than curl.
           - { image: "debian:oldstable-slim", deps: "wget" }
           - { image: "debian:sid-slim", deps: "wget" }
-          - { image: "debian:stable-slim", deps: "curl" }
-          - { image: "ubuntu:24.04", deps: "curl" }
-          - { image: "fedora:latest", deps: "curl" }
-          # Test TAILSCALE_VERSION pinning on a subset of distros.
-          # Skip Alpine as community repos don't reliably keep old versions.
-          - { image: "debian:stable-slim", deps: "curl", version: "1.80.0" }
-          - { image: "ubuntu:24.04", deps: "curl", version: "1.80.0" }
-          - { image: "fedora:latest", deps: "curl", version: "1.80.0" }
+          - { image: "ubuntu:23.04", deps: "wget" }
+          # Ubuntu 16.04 also needs apt-transport-https installed.
+          - { image: "ubuntu:16.04", deps: "curl apt-transport-https" }
+          - { image: "ubuntu:16.04", deps: "wget apt-transport-https" }
     runs-on: ubuntu-latest
     container:
       image: ${{ matrix.image }}
@ -80,10 +76,10 @@ jobs:
         # tar and gzip are needed by the actions/checkout below.
         run: yum install -y --allowerasing tar gzip ${{ matrix.deps }}
         if: |
-          contains(matrix.image, 'centos') ||
-          contains(matrix.image, 'oraclelinux') ||
-          contains(matrix.image, 'fedora') ||
-          contains(matrix.image, 'amazonlinux')
+          contains(matrix.image, 'centos')
+          || contains(matrix.image, 'oraclelinux')
+          || contains(matrix.image, 'fedora')
+          || contains(matrix.image, 'amazonlinux')
       - name: install dependencies (zypper)
         # tar and gzip are needed by the actions/checkout below.
         run: zypper --non-interactive install tar gzip ${{ matrix.deps }}
@ -93,51 +89,21 @@ jobs:
           apt-get update
           apt-get install -y ${{ matrix.deps }}
         if: |
-          contains(matrix.image, 'debian') ||
-          contains(matrix.image, 'ubuntu') ||
-          contains(matrix.image, 'elementary') ||
-          contains(matrix.image, 'parrotsec') ||
-          contains(matrix.image, 'kalilinux')
+          contains(matrix.image, 'debian')
+          || contains(matrix.image, 'ubuntu')
+          || contains(matrix.image, 'elementary')
+          || contains(matrix.image, 'parrotsec')
+          || contains(matrix.image, 'kalilinux')
       - name: checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        # We cannot use v4, as it requires a newer glibc version than some of the
+        # tested images provide. See
+        # https://github.com/actions/checkout/issues/1487
+        uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0
       - name: run installer
         run: scripts/installer.sh
-        env:
-          TAILSCALE_VERSION: ${{ matrix.version }}
         # Package installation can fail in docker because systemd is not running
         # as PID 1, so ignore errors at this step. The real check is the
         # `tailscale --version` command below.
         continue-on-error: true
       - name: check tailscale version
-        run: |
-          tailscale --version
-          if [ -n "${{ matrix.version }}" ]; then
-            tailscale --version | grep -q "^${{ matrix.version }}" || { echo "Version mismatch!"; exit 1; }
-          fi
-  notify-slack:
-    needs: test
-    runs-on: ubuntu-latest
-    steps:
-      - name: Notify Slack of failure on scheduled runs
-        if: failure() && github.event_name == 'schedule'
-        uses: slackapi/slack-github-action@91efab103c0de0a537f72a35f6b8cda0ee76bf0a # v2.1.1
-        with:
-          webhook: ${{ secrets.SLACK_WEBHOOK_URL }}
-          webhook-type: incoming-webhook
-          payload: |
-            {
-              "attachments": [{
-                "title": "Tailscale installer test failed",
-                "title_link": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}",
-                "text": "One or more OSes in the test matrix failed. See the run for details.",
-                "fields": [
-                  {
-                    "title": "Ref",
-                    "value": "${{ github.ref_name }}",
-                    "short": true
-                  }
-                ],
-                "footer": "${{ github.workflow }} on schedule",
-                "color": "danger"
-              }]
-            }
+        run: tailscale --version

@ -9,7 +9,7 @@ on:
 # Cancel workflow run if there is a newer push to the same PR for which it is
 # running
 concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 jobs:
@ -17,7 +17,7 @@ jobs:
     runs-on: [ ubuntu-latest ]
     steps:
       - name: Check out code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       - name: Build and lint Helm chart
         run: |
           eval `./tool/go run ./cmd/mkversion`

@ -1,27 +0,0 @@
# Run some natlab integration tests.
# See https://github.com/tailscale/tailscale/issues/13038
name: "natlab-integrationtest"
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
on:
pull_request:
paths:
- "tstest/integration/nat/nat_test.go"
jobs:
natlab-integrationtest:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Install qemu
run: |
sudo rm /var/lib/man-db/auto-update
sudo apt-get -y update
sudo apt-get -y remove man-db
sudo apt-get install -y qemu-system-x86 qemu-utils
- name: Run natlab integration tests
run: |
./tool/go test -v -run=^TestEasyEasy$ -timeout=3m -count=1 ./tstest/integration/nat --run-vm-tests

@ -1,29 +0,0 @@
# Pin images used in github actions to a hash instead of a version tag.
name: pin-github-actions
on:
pull_request:
branches:
- main
paths:
- ".github/workflows/**"
workflow_dispatch:
permissions:
contents: read
pull-requests: read
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
run:
name: pin-github-actions
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: pin
run: make pin-github-actions
- name: check for changed workflow files
run: git diff --no-ext-diff --exit-code .github/workflows || (echo "Some github actions versions need pinning, run make pin-github-actions."; exit 1)

@ -1,32 +0,0 @@
name: request-dataplane-review
on:
pull_request:
types: [ opened, synchronize, reopened, ready_for_review ]
paths:
- ".github/workflows/request-dataplane-review.yml"
- "**/*derp*"
- "**/derp*/**"
- "!**/depaware.txt"
jobs:
request-dataplane-review:
if: github.event.pull_request.draft == false
name: Request Dataplane Review
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get access token
uses: actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e # v2.0.6
id: generate-token
with:
# Get token for app: https://github.com/apps/change-visibility-bot
app-id: ${{ secrets.VISIBILITY_BOT_APP_ID }}
private-key: ${{ secrets.VISIBILITY_BOT_APP_PRIVATE_KEY }}
- name: Add reviewers
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
url: ${{ github.event.pull_request.html_url }}
run: |
gh pr edit "$url" --add-reviewer tailscale/dataplane

@ -3,7 +3,7 @@
 name: "ssh-integrationtest"
 concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 on:
@ -17,7 +17,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Check out code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       - name: Run SSH integration tests
         run: |
           make sshintegrationtest

@ -15,10 +15,6 @@ env:
# - false: we expect fuzzing to be happy, and should report failure if it's not. # - false: we expect fuzzing to be happy, and should report failure if it's not.
# - true: we expect fuzzing is broken, and should report failure if it start working. # - true: we expect fuzzing is broken, and should report failure if it start working.
TS_FUZZ_CURRENTLY_BROKEN: false TS_FUZZ_CURRENTLY_BROKEN: false
# GOMODCACHE is the same definition on all OSes. Within the workspace, we use
# toplevel directories "src" (for the checked out source code), and "gomodcache"
# and other caches as siblings to follow.
GOMODCACHE: ${{ github.workspace }}/gomodcache
on: on:
push: push:
@ -42,42 +38,8 @@ concurrency:
cancel-in-progress: true cancel-in-progress: true
jobs: jobs:
gomod-cache:
runs-on: ubuntu-24.04
outputs:
cache-key: ${{ steps.hash.outputs.key }}
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: src
- name: Compute cache key from go.{mod,sum}
id: hash
run: echo "key=gomod-cross3-${{ hashFiles('src/go.mod', 'src/go.sum') }}" >> $GITHUB_OUTPUT
# See if the cache entry already exists to avoid downloading it
# and doing the cache write again.
- id: check-cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4
with:
path: gomodcache # relative to workspace; see env note at top of file
key: ${{ steps.hash.outputs.key }}
lookup-only: true
enableCrossOsArchive: true
- name: Download modules
if: steps.check-cache.outputs.cache-hit != 'true'
working-directory: src
run: go mod download
- name: Cache Go modules
if: steps.check-cache.outputs.cache-hit != 'true'
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: gomodcache # relative to workspace; see env note at top of file
key: ${{ steps.hash.outputs.key }}
enableCrossOsArchive: true
race-root-integration: race-root-integration:
runs-on: ubuntu-24.04 runs-on: ubuntu-22.04
needs: gomod-cache
strategy: strategy:
fail-fast: false # don't abort the entire matrix if one element fails fail-fast: false # don't abort the entire matrix if one element fails
matrix: matrix:
@ -88,20 +50,10 @@ jobs:
- shard: '4/4' - shard: '4/4'
steps: steps:
- name: checkout - name: checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
path: src
- name: Restore Go module cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true
- name: build test wrapper - name: build test wrapper
working-directory: src
run: ./tool/go build -o /tmp/testwrapper ./cmd/testwrapper run: ./tool/go build -o /tmp/testwrapper ./cmd/testwrapper
- name: integration tests as root - name: integration tests as root
working-directory: src
run: PATH=$PWD/tool:$PATH /tmp/testwrapper -exec "sudo -E" -race ./tstest/integration/ run: PATH=$PWD/tool:$PATH /tmp/testwrapper -exec "sudo -E" -race ./tstest/integration/
env: env:
TS_TEST_SHARD: ${{ matrix.shard }} TS_TEST_SHARD: ${{ matrix.shard }}
@ -112,6 +64,7 @@ jobs:
matrix: matrix:
include: include:
- goarch: amd64 - goarch: amd64
coverflags: "-coverprofile=/tmp/coverage.out"
- goarch: amd64 - goarch: amd64
buildflags: "-race" buildflags: "-race"
shard: '1/3' shard: '1/3'
@ -122,44 +75,36 @@ jobs:
buildflags: "-race" buildflags: "-race"
shard: '3/3' shard: '3/3'
- goarch: "386" # thanks yaml - goarch: "386" # thanks yaml
runs-on: ubuntu-24.04 runs-on: ubuntu-22.04
needs: gomod-cache
steps: steps:
- name: checkout - name: checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
path: src
- name: Restore Go module cache
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: gomodcache
key: ${{ needs.gomod-cache.outputs.cache-key }}
enableCrossOsArchive: true
- name: Restore Cache - name: Restore Cache
id: restore-cache uses: actions/cache@6849a6489940f00c2f30c0fb92c6274307ccb58a # v4.1.2
uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with: with:
# Note: this is only restoring the build cache. Mod cache is shared amongst # Note: unlike the other setups, this is only grabbing the mod download
# all jobs in the workflow. # cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: | path: |
~/.cache/go-build ~/.cache/go-build
~/go/pkg/mod/cache
~\AppData\Local\go-build ~\AppData\Local\go-build
key: ${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }} # The -2- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: | restore-keys: |
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}- ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2-${{ hashFiles('**/go.sum') }}
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}- ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-2-
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-
${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-
      - name: build all
        if: matrix.buildflags == '' # skip on race builder
        working-directory: src
        run: ./tool/go build ${{matrix.buildflags}} ./...
        env:
          GOARCH: ${{ matrix.goarch }}
      - name: build variant CLIs
        if: matrix.buildflags == '' # skip on race builder
        working-directory: src
        run: |
          export TS_USE_TOOLCHAIN=1
          ./build_dist.sh --extra-small ./cmd/tailscaled
          ./build_dist.sh --box ./cmd/tailscaled
          ./build_dist.sh --extra-small --box ./cmd/tailscaled
@@ -172,24 +117,24 @@ jobs:
          sudo apt-get -y update
          sudo apt-get -y install qemu-user
      - name: build test wrapper
        working-directory: src
        run: ./tool/go build -o /tmp/testwrapper ./cmd/testwrapper
      - name: test all
        working-directory: src
        run: NOBASHDEBUG=true NOPWSHDEBUG=true PATH=$PWD/tool:$PATH /tmp/testwrapper ./... ${{matrix.buildflags}}
        env:
          GOARCH: ${{ matrix.goarch }}
          TS_TEST_SHARD: ${{ matrix.shard }}
      - name: bench all
        working-directory: src
        run: ./tool/go test ${{matrix.buildflags}} -bench=. -benchtime=1x -run=^$ $(for x in $(git grep -l "^func Benchmark" | xargs dirname | sort | uniq); do echo "./$x"; done)
        env:
          GOARCH: ${{ matrix.goarch }}
      - name: check that no tracked files changed
        working-directory: src
        run: git diff --no-ext-diff --name-only --exit-code || (echo "Build/test modified the files above."; exit 1)
      - name: check that no new files were added
        working-directory: src
        run: |
          # Note: The "error: pathspec..." you see below is normal!
          # In the success case in which there are no new untracked files,
@@ -200,155 +145,82 @@ jobs:
            echo "Build/test created untracked files in the repo (file names above)."
            exit 1
          fi
      - name: Tidy cache
        working-directory: src
        shell: bash
        run: |
          find $(go env GOCACHE) -type f -mmin +90 -delete
      - name: Save Cache
        # Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
        if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
        uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          # Note: this is only saving the build cache. Mod cache is shared amongst
          # all jobs in the workflow.
          path: |
            ~/.cache/go-build
            ~\AppData\Local\go-build
          key: ${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-${{ matrix.shard }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
  windows:
    permissions:
      id-token: write # This is required for requesting the GitHub action identity JWT that can auth to cigocached
      contents: read # This is required for actions/checkout
    # ci-windows-github-1 is a 2022 GitHub-managed runner in our org with 8 cores
    # and 32 GB of RAM. It is connected to a private Azure VNet that hosts cigocached.
    # https://github.com/organizations/tailscale/settings/actions/github-hosted-runners/5
    runs-on: ci-windows-github-1
    needs: gomod-cache
    name: Windows (${{ matrix.name || matrix.shard }})
    strategy:
      fail-fast: false # don't abort the entire matrix if one element fails
      matrix:
        include:
          - key: "win-bench"
            name: "benchmarks"
          - key: "win-shard-1-2"
            shard: "1/2"
          - key: "win-shard-2-2"
            shard: "2/2"
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: ${{ github.workspace }}/src
      - name: Install Go
        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
        with:
          go-version-file: ${{ github.workspace }}/src/go.mod
          cache: false
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: Set up cigocacher
        id: cigocacher-setup
        uses: ./src/.github/actions/go-cache
        with:
          checkout-path: ${{ github.workspace }}/src
          cache-dir: ${{ github.workspace }}/cigocacher
          cigocached-url: ${{ vars.CIGOCACHED_AZURE_URL }}
          cigocached-host: ${{ vars.CIGOCACHED_AZURE_HOST }}
      - name: test
        if: matrix.key != 'win-bench' # skip on bench builder
        working-directory: src
        run: go run ./cmd/testwrapper sharded:${{ matrix.shard }}
      - name: bench all
        if: matrix.key == 'win-bench'
        working-directory: src
        # Don't use -bench=. -benchtime=1x.
        # Somewhere in the layers (powershell?)
        # the equals signs cause great confusion.
        run: go test ./... -bench . -benchtime 1x -run "^$"
      - name: Print stats
        shell: pwsh
        if: steps.cigocacher-setup.outputs.success == 'true'
        env:
          GOCACHEPROG: ${{ env.GOCACHEPROG }}
        run: |
          Invoke-Expression "$env:GOCACHEPROG --stats" | jq .
  win-tool-go:
    runs-on: windows-latest
    needs: gomod-cache
    name: Windows (win-tool-go)
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: test-tool-go
        working-directory: src
        run: ./tool/go version
  privileged:
    needs: gomod-cache
    runs-on: ubuntu-24.04
    container:
      image: golang:latest
      options: --privileged
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: chown
        working-directory: src
        run: chown -R $(id -u):$(id -g) $PWD
      - name: privileged tests
        working-directory: src
        run: ./tool/go test ./util/linuxfw ./derp/xdp
  vm:
    needs: gomod-cache
    runs-on: ["self-hosted", "linux", "vm"]
    # VM tests run with some privileges, don't let them run on 3p PRs.
    if: github.repository == 'tailscale/tailscale'
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: Run VM tests
        working-directory: src
        run: ./tool/go test ./tstest/integration/vms -v -no-s3 -run-vm-tests -run=TestRunUbuntu2404
        env:
          HOME: "/var/lib/ghrunner/home"
          TMPDIR: "/tmp"
          XDG_CACHE_HOME: "/var/lib/ghrunner/cache"
  cross: # cross-compile checks, build only.
    needs: gomod-cache
    strategy:
      fail-fast: false # don't abort the entire matrix if one element fails
      matrix:
@@ -383,34 +255,28 @@ jobs:
        - goos: openbsd
          goarch: amd64
    runs-on: ubuntu-24.04
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: Restore Cache
        id: restore-cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          # Note: this is only restoring the build cache. Mod cache is shared amongst
          # all jobs in the workflow.
          path: |
            ~/.cache/go-build
            ~\AppData\Local\go-build
          key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
          restore-keys: |
            ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
            ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-
            ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-
      - name: build all
        working-directory: src
        run: ./tool/go build ./cmd/...
        env:
          GOOS: ${{ matrix.goos }}
@@ -418,53 +284,25 @@ jobs:
          GOARM: ${{ matrix.goarm }}
          CGO_ENABLED: "0"
      - name: build tests
        working-directory: src
        run: ./tool/go test -exec=true ./...
        env:
          GOOS: ${{ matrix.goos }}
          GOARCH: ${{ matrix.goarch }}
          CGO_ENABLED: "0"
      - name: Tidy cache
        working-directory: src
        shell: bash
        run: |
          find $(go env GOCACHE) -type f -mmin +90 -delete
      - name: Save Cache
        # Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
        if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
        uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          # Note: this is only saving the build cache. Mod cache is shared amongst
          # all jobs in the workflow.
          path: |
            ~/.cache/go-build
            ~\AppData\Local\go-build
          key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-${{ matrix.goarm }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
  ios: # similar to cross above, but iOS can't build most of the repo. So, just
    # make it build a few smoke packages.
    runs-on: ubuntu-24.04
    needs: gomod-cache
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: build some
        working-directory: src
        run: ./tool/go build ./ipn/... ./ssh/tailssh ./wgengine/ ./types/... ./control/controlclient
        env:
          GOOS: ios
          GOARCH: arm64
  crossmin: # cross-compile for platforms where we only check cmd/tailscale{,d}
    needs: gomod-cache
    strategy:
      fail-fast: false # don't abort the entire matrix if one element fails
      matrix:
@@ -475,164 +313,93 @@ jobs:
        # AIX
        - goos: aix
          goarch: ppc64
        # Solaris
        - goos: solaris
          goarch: amd64
        # illumos
        - goos: illumos
          goarch: amd64
    runs-on: ubuntu-24.04
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: Restore Cache
        id: restore-cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          # Note: this is only restoring the build cache. Mod cache is shared amongst
          # all jobs in the workflow.
          path: |
            ~/.cache/go-build
            ~\AppData\Local\go-build
          key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
          restore-keys: |
            ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
            ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-
            ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-
      - name: build core
        working-directory: src
        run: ./tool/go build ./cmd/tailscale ./cmd/tailscaled
        env:
          GOOS: ${{ matrix.goos }}
          GOARCH: ${{ matrix.goarch }}
          GOARM: ${{ matrix.goarm }}
          CGO_ENABLED: "0"
      - name: Tidy cache
        working-directory: src
        shell: bash
        run: |
          find $(go env GOCACHE) -type f -mmin +90 -delete
      - name: Save Cache
        # Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
        if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
        uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          # Note: this is only saving the build cache. Mod cache is shared amongst
          # all jobs in the workflow.
          path: |
            ~/.cache/go-build
            ~\AppData\Local\go-build
          key: ${{ runner.os }}-${{ matrix.goos }}-${{ matrix.goarch }}-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
  android:
    # similar to cross above, but android fails to build a few pieces of the
    # repo. We should fix those pieces, they're small, but as a stepping stone,
    # only test the subset of android that our past smoke test checked.
    runs-on: ubuntu-24.04
    needs: gomod-cache
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      # Super minimal Android build that doesn't even use CGO and doesn't build everything that's needed
      # and is only arm64. But it's a smoke build: it's not meant to catch everything. But it'll catch
      # some Android breakages early.
      # TODO(bradfitz): better; see https://github.com/tailscale/tailscale/issues/4482
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: build some
        working-directory: src
        run: ./tool/go install ./net/netns ./ipn/ipnlocal ./wgengine/magicsock/ ./wgengine/ ./wgengine/router/ ./wgengine/netstack ./util/dnsname/ ./ipn/ ./net/netmon ./wgengine/router/ ./tailcfg/ ./types/logger/ ./net/dns ./hostinfo ./version ./ssh/tailssh
        env:
          GOOS: android
          GOARCH: arm64
  wasm: # builds tsconnect, which is the only wasm build we support
    runs-on: ubuntu-24.04
    needs: gomod-cache
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: Restore Cache
        id: restore-cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          # Note: this is only restoring the build cache. Mod cache is shared amongst
          # all jobs in the workflow.
          path: |
            ~/.cache/go-build
            ~\AppData\Local\go-build
          key: ${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
          restore-keys: |
            ${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-
            ${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-
            ${{ runner.os }}-js-wasm-go-
      - name: build tsconnect client
        working-directory: src
        run: ./tool/go build ./cmd/tsconnect/wasm ./cmd/tailscale/cli
        env:
          GOOS: js
          GOARCH: wasm
      - name: build tsconnect server
        working-directory: src
        # Note, no GOOS/GOARCH in env on this build step, we're running a build
        # tool that handles the build itself.
        run: |
          ./tool/go run ./cmd/tsconnect --fast-compression build
          ./tool/go run ./cmd/tsconnect --fast-compression build-pkg
      - name: Tidy cache
        working-directory: src
        shell: bash
        run: |
          find $(go env GOCACHE) -type f -mmin +90 -delete
      - name: Save Cache
        # Save cache even on failure, but only on cache miss and main branch to avoid thrashing.
        if: always() && steps.restore-cache.outputs.cache-hit != 'true' && github.ref == 'refs/heads/main'
        uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          # Note: this is only saving the build cache. Mod cache is shared amongst
          # all jobs in the workflow.
          path: |
            ~/.cache/go-build
            ~\AppData\Local\go-build
          key: ${{ runner.os }}-js-wasm-go-${{ hashFiles('**/go.sum') }}-${{ github.job }}-${{ github.run_id }}
  tailscale_go: # Subset of tests that depend on our custom Go toolchain.
    runs-on: ubuntu-24.04
    needs: gomod-cache
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Set GOMODCACHE env
        run: echo "GOMODCACHE=$HOME/.cache/go-mod" >> $GITHUB_ENV
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: test tailscale_go
        run: ./tool/go test -tags=tailscale_go,ts_enable_sockstats ./net/sockstats/...
@@ -649,13 +416,11 @@ jobs:
    # explicit 'if' condition, because the default condition for steps is
    # 'success()', meaning "only run this if no previous steps failed".
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-24.04
    steps:
      - name: build fuzzers
        id: build
        # As of 21 October 2025, this repo doesn't tag releases, so this commit
        # hash is just the tip of master.
        uses: google/oss-fuzz/infra/cifuzz/actions/build_fuzzers@1242ccb5b6352601e73c00f189ac2ae397242264
        # continue-on-error makes steps.build.conclusion be 'success' even if
        # steps.build.outcome is 'failure'. This means this step does not
        # contribute to the job's overall pass/fail evaluation.
@@ -685,12 +450,10 @@ jobs:
        # report a failure because TS_FUZZ_CURRENTLY_BROKEN is set to the wrong
        # value.
        if: steps.build.outcome == 'success'
        # As of 21 October 2025, this repo doesn't tag releases, so this commit
        # hash is just the tip of master.
        uses: google/oss-fuzz/infra/cifuzz/actions/run_fuzzers@1242ccb5b6352601e73c00f189ac2ae397242264
        with:
          oss-fuzz-project-name: 'tailscale'
          fuzz-seconds: 150
          dry-run: false
          language: go
      - name: Set artifacts_path in env (workaround for actions/upload-artifact#176)
@@ -698,154 +461,78 @@ jobs:
        run: |
          echo "artifacts_path=$(realpath .)" >> $GITHUB_ENV
      - name: upload crash
        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
        if: steps.run.outcome != 'success' && steps.build.outcome == 'success'
        with:
          name: artifacts
          path: ${{ env.artifacts_path }}/out/artifacts
  depaware:
    runs-on: ubuntu-24.04
    needs: gomod-cache
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Set GOMODCACHE env
        run: echo "GOMODCACHE=$HOME/.cache/go-mod" >> $GITHUB_ENV
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: check depaware
        working-directory: src
        run: make depaware
  go_generate:
    runs-on: ubuntu-24.04
    needs: gomod-cache
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: check that 'go generate' is clean
        working-directory: src
        run: |
          pkgs=$(./tool/go list ./... | grep -Ev 'dnsfallback|k8s-operator|xdp')
          ./tool/go generate $pkgs
          git add -N . # ensure untracked files are noticed
          echo
          echo
          git diff --name-only --exit-code || (echo "The files above need updating. Please run 'go generate'."; exit 1)
  go_mod_tidy:
    runs-on: ubuntu-24.04
    needs: gomod-cache
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: check that 'go mod tidy' is clean
        working-directory: src
        run: |
          make tidy
          echo
          echo
          git diff --name-only --exit-code || (echo "Please run 'make tidy'"; exit 1)
  licenses:
    runs-on: ubuntu-24.04
    needs: gomod-cache
    steps:
      - name: checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          path: src
      - name: Restore Go module cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: gomodcache
          key: ${{ needs.gomod-cache.outputs.cache-key }}
          enableCrossOsArchive: true
      - name: check licenses
        working-directory: src
        run: |
          grep -q TestLicenseHeaders *.go || (echo "Expected a test named TestLicenseHeaders"; exit 1)
          ./tool/go test -v -run=TestLicenseHeaders
   staticcheck:
-    runs-on: ubuntu-24.04
-    needs: gomod-cache
-    name: staticcheck (${{ matrix.name }})
+    runs-on: ubuntu-22.04
     strategy:
       fail-fast: false # don't abort the entire matrix if one element fails
       matrix:
+        goos: ["linux", "windows", "darwin"]
+        goarch: ["amd64"]
         include:
-          - name: "macOS"
-            goos: "darwin"
-            goarch: "arm64"
-            flags: "--with-tags-all=darwin"
-          - name: "Windows"
-            goos: "windows"
-            goarch: "amd64"
-            flags: "--with-tags-all=windows"
-          - name: "Linux"
-            goos: "linux"
-            goarch: "amd64"
-            flags: "--with-tags-all=linux"
-          - name: "Portable (1/4)"
-            goos: "linux"
-            goarch: "amd64"
-            flags: "--without-tags-any=windows,darwin,linux --shard=1/4"
-          - name: "Portable (2/4)"
-            goos: "linux"
-            goarch: "amd64"
-            flags: "--without-tags-any=windows,darwin,linux --shard=2/4"
-          - name: "Portable (3/4)"
-            goos: "linux"
-            goarch: "amd64"
-            flags: "--without-tags-any=windows,darwin,linux --shard=3/4"
-          - name: "Portable (4/4)"
-            goos: "linux"
-            goarch: "amd64"
-            flags: "--without-tags-any=windows,darwin,linux --shard=4/4"
+          - goos: "windows"
+            goarch: "386"
     steps:
       - name: checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
-        with:
-          path: src
-      - name: Restore Go module cache
-        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
-        with:
-          path: gomodcache
-          key: ${{ needs.gomod-cache.outputs.cache-key }}
-          enableCrossOsArchive: true
-      - name: run staticcheck (${{ matrix.name }})
-        working-directory: src
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
+      - name: install staticcheck
+        run: GOBIN=~/.local/bin ./tool/go install honnef.co/go/tools/cmd/staticcheck
+      - name: run staticcheck
         run: |
           export GOROOT=$(./tool/go env GOROOT)
-          ./tool/go run -exec \
-            "env GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }}" \
-            honnef.co/go/tools/cmd/staticcheck -- \
-            $(./tool/go run ./tool/listpkgs --ignore-3p --goos=${{ matrix.goos }} --goarch=${{ matrix.goarch }} ${{ matrix.flags }} ./...)
+          export PATH=$GOROOT/bin:$PATH
+          staticcheck -- $(./tool/go list ./... | grep -v tempfork)
+        env:
+          GOOS: ${{ matrix.goos }}
+          GOARCH: ${{ matrix.goarch }}
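The left-hand side of the hunk above cross-analyzes each target platform with `go run -exec`: the staticcheck binary is built for the host, but launched through an `env GOOS=... GOARCH=...` wrapper so it analyzes packages for the matrix target. A minimal sketch of how that command line is assembled (illustrative only; a real run needs a tailscale checkout and its `./tool/go` wrapper):

```shell
# Hypothetical helper mirroring the matrix step: builds the command string
# that runs staticcheck under an env wrapper for a given target platform.
build_cmd() {
  goos=$1
  goarch=$2
  echo "go run -exec \"env GOOS=$goos GOARCH=$goarch\" honnef.co/go/tools/cmd/staticcheck"
}

# For the Windows matrix entry this yields the wrapped invocation:
build_cmd windows amd64
```

The `-exec` flag only changes how the built binary is launched, which is why the older right-hand workflow instead exports `GOOS`/`GOARCH` as plain environment variables for a natively installed `staticcheck`.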
   notify_slack:
     if: always()
@@ -865,7 +552,7 @@ jobs:
       - go_mod_tidy
       - licenses
       - staticcheck
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
     steps:
       - name: notify
         # Only notify slack for merged commits, not PR failures.
@@ -876,10 +563,8 @@ jobs:
         # By having the job always run, but skipping its only step as needed, we
         # let the CI output collapse nicely in PRs.
         if: failure() && github.event_name == 'push'
-        uses: slackapi/slack-github-action@91efab103c0de0a537f72a35f6b8cda0ee76bf0a # v2.1.1
+        uses: slackapi/slack-github-action@37ebaef184d7626c5f204ab8d3baff4262dd30f0 # v1.27.0
         with:
-          webhook: ${{ secrets.SLACK_WEBHOOK_URL }}
-          webhook-type: incoming-webhook
           payload: |
             {
               "attachments": [{
@@ -891,10 +576,13 @@ jobs:
                 "color": "danger"
               }]
             }
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
+          SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
-  merge_blocker:
+  check_mergeability:
     if: always()
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
     needs:
       - android
       - test
@@ -916,46 +604,3 @@ jobs:
       uses: re-actors/alls-green@05ac9388f0aebcb5727afa17fcccfecd6f8ec5fe # v1.2.2
       with:
         jobs: ${{ toJSON(needs) }}
-  # This waits on all the jobs which must never fail. Branch protection rules
-  # enforce these. No flaky tests are allowed in these jobs. (We don't want flaky
-  # tests anywhere, really, but a flaky test here prevents merging.)
-  check_mergeability_strict:
-    if: always()
-    runs-on: ubuntu-24.04
-    needs:
-      - android
-      - cross
-      - crossmin
-      - ios
-      - tailscale_go
-      - depaware
-      - go_generate
-      - go_mod_tidy
-      - licenses
-      - staticcheck
-    steps:
-      - name: Decide if change is okay to merge
-        if: github.event_name != 'push'
-        uses: re-actors/alls-green@05ac9388f0aebcb5727afa17fcccfecd6f8ec5fe # v1.2.2
-        with:
-          jobs: ${{ toJSON(needs) }}
-  check_mergeability:
-    if: always()
-    runs-on: ubuntu-24.04
-    needs:
-      - check_mergeability_strict
-      - test
-      - windows
-      - vm
-      - wasm
-      - fuzz
-      - race-root-integration
-      - privileged
-    steps:
-      - name: Decide if change is okay to merge
-        if: github.event_name != 'push'
-        uses: re-actors/alls-green@05ac9388f0aebcb5727afa17fcccfecd6f8ec5fe # v1.2.2
-        with:
-          jobs: ${{ toJSON(needs) }}

@@ -8,11 +8,11 @@ on:
       - main
     paths:
       - go.mod
-      - .github/workflows/update-flake.yml
+      - .github/workflows/update-flakes.yml
   workflow_dispatch:
 concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 jobs:
@@ -21,21 +21,22 @@ jobs:
     steps:
       - name: Check out code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       - name: Run update-flakes
         run: ./update-flake.sh
       - name: Get access token
-        uses: actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e # v2.0.6
+        uses: tibdex/github-app-token@3beb63f4bd073e61482598c45c71c1019b59b73a # v2.1.0
         id: generate-token
         with:
-          # Get token for app: https://github.com/apps/tailscale-code-updater
-          app-id: ${{ secrets.CODE_UPDATER_APP_ID }}
-          private-key: ${{ secrets.CODE_UPDATER_APP_PRIVATE_KEY }}
+          app_id: ${{ secrets.LICENSING_APP_ID }}
+          installation_retrieval_mode: "id"
+          installation_retrieval_payload: ${{ secrets.LICENSING_APP_INSTALLATION_ID }}
+          private_key: ${{ secrets.LICENSING_APP_PRIVATE_KEY }}
       - name: Send pull request
-        uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 #v8.0.0
+        uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f #v7.0.5
         with:
           token: ${{ steps.generate-token.outputs.token }}
           author: Flakes Updater <noreply+flakes-updater@tailscale.com>
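One easy-to-miss detail in the concurrency hunk above: the right-hand (release-branch) side writes the group as `$${{ ... }}`. If I read the workflow expression syntax correctly, `$$` is not an escape in GitHub Actions YAML, so the extra `$` is emitted literally and the expression still expands; this is a sketch of my reading, not a confirmed behavior of this repo's CI:

```yaml
concurrency:
  # ${{ ... }} is expanded wherever it appears; the leading "$" survives as a
  # literal character, so for workflow "CI" on branch "my-branch" this group
  # presumably evaluates to "CI-$my-branch" rather than "CI-my-branch".
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true
```

The group name is only used as a dedup key, so the stray `$` would be harmless, which may be why it persisted on the release branch.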

@@ -5,7 +5,7 @@ on:
   workflow_dispatch:
 concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 jobs:
@@ -14,7 +14,7 @@ jobs:
     steps:
       - name: Check out code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       - name: Run go get
         run: |
@@ -23,16 +23,19 @@ jobs:
           ./tool/go mod tidy
       - name: Get access token
-        uses: actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e # v2.0.6
+        uses: tibdex/github-app-token@3beb63f4bd073e61482598c45c71c1019b59b73a # v2.1.0
         id: generate-token
         with:
-          # Get token for app: https://github.com/apps/tailscale-code-updater
-          app-id: ${{ secrets.CODE_UPDATER_APP_ID }}
-          private-key: ${{ secrets.CODE_UPDATER_APP_PRIVATE_KEY }}
+          # TODO(will): this should use the code updater app rather than licensing.
+          # It has the same permissions, so not a big deal, but still.
+          app_id: ${{ secrets.LICENSING_APP_ID }}
+          installation_retrieval_mode: "id"
+          installation_retrieval_payload: ${{ secrets.LICENSING_APP_INSTALLATION_ID }}
+          private_key: ${{ secrets.LICENSING_APP_PRIVATE_KEY }}
       - name: Send pull request
         id: pull-request
-        uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 #v8.0.0
+        uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f #v7.0.5
         with:
           token: ${{ steps.generate-token.outputs.token }}
           author: OSS Updater <noreply+oss-updater@tailscale.com>
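The two app-token actions swapped in the hunks above take differently named inputs, which is why every `with:` key changes along with the `uses:` line. A side-by-side config sketch, as I understand the two actions' input schemas (secret names are the ones from this diff):

```yaml
# Right side: tibdex/github-app-token wants snake_case inputs and an
# explicitly supplied installation.
- uses: tibdex/github-app-token@v2
  with:
    app_id: ${{ secrets.LICENSING_APP_ID }}
    installation_retrieval_mode: "id"
    installation_retrieval_payload: ${{ secrets.LICENSING_APP_INSTALLATION_ID }}
    private_key: ${{ secrets.LICENSING_APP_PRIVATE_KEY }}

# Left side: actions/create-github-app-token wants kebab-case inputs and,
# by default, resolves the installation from the current repository.
- uses: actions/create-github-app-token@v2
  with:
    app-id: ${{ secrets.CODE_UPDATER_APP_ID }}
    private-key: ${{ secrets.CODE_UPDATER_APP_PRIVATE_KEY }}
```

Either way the result is exposed as `steps.generate-token.outputs.token`, which is why the downstream `create-pull-request` step is unchanged apart from its pinned SHA.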

@@ -1,38 +0,0 @@
-name: tailscale.com/cmd/vet
-env:
-  HOME: ${{ github.workspace }}
-  # GOMODCACHE is the same definition on all OSes. Within the workspace, we use
-  # toplevel directories "src" (for the checked out source code), and "gomodcache"
-  # and other caches as siblings to follow.
-  GOMODCACHE: ${{ github.workspace }}/gomodcache
-on:
-  push:
-    branches:
-      - main
-      - "release-branch/*"
-    paths:
-      - "**.go"
-  pull_request:
-    paths:
-      - "**.go"
-jobs:
-  vet:
-    runs-on: [ self-hosted, linux ]
-    timeout-minutes: 5
-    steps:
-      - name: Check out code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
-        with:
-          path: src
-      - name: Build 'go vet' tool
-        working-directory: src
-        run: ./tool/go build -o /tmp/vettool tailscale.com/cmd/vet
-      - name: Run 'go vet'
-        working-directory: src
-        run: ./tool/go vet -vettool=/tmp/vettool tailscale.com/...

@@ -3,6 +3,8 @@ on:
   workflow_dispatch:
   # For now, only run on requests, not the main branches.
   pull_request:
+    branches:
+      - "*"
     paths:
       - "client/web/**"
       - ".github/workflows/webclient.yml"
@@ -13,7 +15,7 @@ on:
   #    - main
 concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 jobs:
@@ -22,7 +24,7 @@ jobs:
     steps:
       - name: Check out code
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
       - name: Install deps
         run: ./tool/yarn --cwd client/web
       - name: Run lint

.gitignore (vendored)
@@ -49,6 +49,3 @@ client/web/build/assets
 *.xcworkspacedata
 /tstest/tailmac/bin
 /tstest/tailmac/build
-# Ignore personal IntelliJ settings
-.idea/

@@ -1,110 +1,104 @@
-version: "2"
-# Configuration for how we run golangci-lint
-# Timeout of 5m was the default in v1.
-run:
-  timeout: 5m
-linters:
-  # Don't enable any linters by default; just the ones that we explicitly
-  # enable in the list below.
-  default: none
-  enable:
-    - bidichk
-    - govet
-    - misspell
-    - revive
-  settings:
-    # Matches what we use in corp as of 2023-12-07
-    govet:
-      enable:
-        - asmdecl
-        - assign
-        - atomic
-        - bools
-        - buildtag
-        - cgocall
-        - copylocks
-        - deepequalerrors
-        - errorsas
-        - framepointer
-        - httpresponse
-        - ifaceassert
-        - loopclosure
-        - lostcancel
-        - nilfunc
-        - nilness
-        - printf
-        - reflectvaluecompare
-        - shift
-        - sigchanyzer
-        - sortslice
-        - stdmethods
-        - stringintconv
-        - structtag
-        - testinggoroutine
-        - tests
-        - unmarshal
-        - unreachable
-        - unsafeptr
-        - unusedresult
-      settings:
-        printf:
-          # List of print function names to check (in addition to default)
-          funcs:
-            - github.com/tailscale/tailscale/types/logger.Discard
-            # NOTE(andrew-d): this doesn't currently work because the printf
-            # analyzer doesn't support type declarations
-            #- github.com/tailscale/tailscale/types/logger.Logf
-    revive:
-      enable-all-rules: false
-      rules:
-        - name: atomic
-        - name: context-keys-type
-        - name: defer
-          arguments: [[
-            # Calling 'recover' at the time a defer is registered (i.e. "defer recover()") has no effect.
-            "immediate-recover",
-            # Calling 'recover' outside of a deferred function has no effect
-            "recover",
-            # Returning values from a deferred function has no effect
-            "return",
-          ]]
-        - name: duplicated-imports
-        - name: errorf
-        - name: string-of-int
-        - name: time-equal
-        - name: unconditional-recursion
-        - name: useless-break
-        - name: waitgroup-by-value
-  exclusions:
-    generated: lax
-    presets:
-      - comments
-      - common-false-positives
-      - legacy
-      - std-error-handling
-    rules:
-      # These are forks of an upstream package and thus are exempt from stylistic
-      # changes that would make pulling in upstream changes harder.
-      - path: tempfork/.*\.go
-        text: File is not `gofmt`-ed with `-s` `-r 'interface{} -> any'`
-      - path: util/singleflight/.*\.go
-        text: File is not `gofmt`-ed with `-s` `-r 'interface{} -> any'`
-    paths:
-      - third_party$
-      - builtin$
-      - examples$
-formatters:
-  enable:
-    - gofmt
-    - goimports
-  settings:
-    gofmt:
-      rewrite-rules:
-        - pattern: interface{}
-          replacement: any
-  exclusions:
-    generated: lax
-    paths:
-      - third_party$
-      - builtin$
-      - examples$
+# Configuration for how we run golangci-lint
+run:
+  timeout: 5m
+linters:
+  # Don't enable any linters by default; just the ones that we explicitly
+  # enable in the list below.
+  disable-all: true
+  enable:
+    - bidichk
+    - gofmt
+    - goimports
+    - govet
+    - misspell
+    - revive
+issues:
+  # Excluding configuration per-path, per-linter, per-text and per-source
+  exclude-rules:
+    # These are forks of an upstream package and thus are exempt from stylistic
+    # changes that would make pulling in upstream changes harder.
+    - path: tempfork/.*\.go
+      text: "File is not `gofmt`-ed with `-s` `-r 'interface{} -> any'`"
+    - path: util/singleflight/.*\.go
+      text: "File is not `gofmt`-ed with `-s` `-r 'interface{} -> any'`"
+# Per-linter settings are contained in this top-level key
+linters-settings:
+  # Enable all rules by default; we don't use invisible unicode runes.
+  bidichk:
+  gofmt:
+    rewrite-rules:
+      - pattern: 'interface{}'
+        replacement: 'any'
+  goimports:
+  govet:
+    # Matches what we use in corp as of 2023-12-07
+    enable:
+      - asmdecl
+      - assign
+      - atomic
+      - bools
+      - buildtag
+      - cgocall
+      - copylocks
+      - deepequalerrors
+      - errorsas
+      - framepointer
+      - httpresponse
+      - ifaceassert
+      - loopclosure
+      - lostcancel
+      - nilfunc
+      - nilness
+      - printf
+      - reflectvaluecompare
+      - shift
+      - sigchanyzer
+      - sortslice
+      - stdmethods
+      - stringintconv
+      - structtag
+      - testinggoroutine
+      - tests
+      - unmarshal
+      - unreachable
+      - unsafeptr
+      - unusedresult
+    settings:
+      printf:
+        # List of print function names to check (in addition to default)
+        funcs:
+          - github.com/tailscale/tailscale/types/logger.Discard
+          # NOTE(andrew-d): this doesn't currently work because the printf
+          # analyzer doesn't support type declarations
+          #- github.com/tailscale/tailscale/types/logger.Logf
+  misspell:
+  revive:
+    enable-all-rules: false
+    ignore-generated-header: true
+    rules:
+      - name: atomic
+      - name: context-keys-type
+      - name: defer
+        arguments: [[
+          # Calling 'recover' at the time a defer is registered (i.e. "defer recover()") has no effect.
+          "immediate-recover",
+          # Calling 'recover' outside of a deferred function has no effect
+          "recover",
+          # Returning values from a deferred function has no effect
+          "return",
+        ]]
+      - name: duplicated-imports
+      - name: errorf
+      - name: string-of-int
+      - name: time-equal
+      - name: unconditional-recursion
+      - name: useless-break
+      - name: waitgroup-by-value

@@ -1 +1 @@
-3.22
+3.18

@@ -1,103 +1,135 @@
-# Tailscale Community Code of Conduct
-
-## Our Pledge
-
-We are committed to creating an open, welcoming, diverse, inclusive, healthy and respectful community.
-Unacceptable, harmful and inappropriate behavior will not be tolerated.
-
-## Our Standards
-
-Examples of behavior that contributes to a positive environment for our community include:
-
-- Demonstrating empathy and kindness toward other people.
-- Being respectful of differing opinions, viewpoints, and experiences.
-- Giving and gracefully accepting constructive feedback.
-- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience.
-- Focusing on what is best not just for us as individuals, but for the overall community.
-
-Examples of unacceptable behavior include without limitation:
-
-- The use of language, imagery or emojis (collectively "content") that is racist, sexist, homophobic, transphobic, or otherwise harassing or discriminatory based on any protected characteristic.
-- The use of sexualized content and sexual attention or advances of any kind.
-- The use of violent, intimidating or bullying content.
-- Trolling, concern trolling, insulting or derogatory comments, and personal or political attacks.
-- Public or private harassment.
-- Publishing others' personal information, such as a photo, physical address, email address, online profile information, or other personal information, without their explicit permission or with the intent to bully or harass the other person.
-- Posting deep fake or other AI generated content about or involving another person without the explicit permission.
-- Spamming community channels and members, such as sending repeat messages, low-effort content, or automated messages.
-- Phishing or any similar activity.
-- Distributing or promoting malware.
-- The use of any coded or suggestive content to hide or provoke otherwise unacceptable behavior.
-- Other conduct which could reasonably be considered harmful, illegal, or inappropriate in a professional setting.
-
-Please also see the Tailscale Acceptable Use Policy, available at [tailscale.com/tailscale-aup](https://tailscale.com/tailscale-aup).
-
-## Reporting Incidents
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to Tailscale directly via <info@tailscale.com>, or to the community leaders or moderators via DM or similar.
-All complaints will be reviewed and investigated promptly and fairly.
-We will respect the privacy and safety of the reporter of any issues.
-
-Please note that this community is not moderated by staff 24/7, and we do not have, and do not undertake, any obligation to prescreen, monitor, edit, or remove any content or data, or to actively seek facts or circumstances indicating illegal activity.
-While we strive to keep the community safe and welcoming, moderation may not be immediate at all hours.
-If you encounter any issues, report them using the appropriate channels.
-
-## Enforcement Guidelines
-
-Community leaders and moderators are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
-
-Community leaders and moderators have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Community Code of Conduct.
-Tailscale retains full discretion to take action (or not) in response to a violation of these guidelines with or without notice or liability to you.
-We will interpret our policies and resolve disputes in favor of protecting users, customers, the public, our community and our company, as a whole.
-
-Community leaders will follow these community enforcement guidelines in determining the consequences for any action they deem in violation of this Code of Conduct,
-and retain full discretion to apply the enforcement guidelines as necessary depending on the circumstances:
-
-### 1. Correction
-
-Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
-
-Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate.
-A public apology may be requested.
-
-### 2. Warning
-
-Community Impact: A violation through a single incident or series of actions.
-
-Consequence: A warning with consequences for continued behavior.
-No interaction with the people involved, including unsolicited interaction with those enforcing this Community Code of Conduct, for a specified period of time.
-This includes avoiding interactions in community spaces as well as external channels like social media.
-Violating these terms may lead to a temporary or permanent ban.
-
-### 3. Temporary Ban
-
-Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
-
-Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time.
-No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
-
-### 4. Permanent Ban
-
-Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
-
-Consequence: A permanent ban from any sort of public interaction within the community.
-
-## Acceptable Use Policy
-
-Violation of this Community Code of Conduct may also violate the Tailscale Acceptable Use Policy, which may result in suspension or termination of your Tailscale account.
-For more information, please see the Tailscale Acceptable Use Policy, available at [tailscale.com/tailscale-aup](https://tailscale.com/tailscale-aup).
-
-## Privacy
-
-Please see the Tailscale [Privacy Policy](https://tailscale.com/privacy-policy) for more information about how Tailscale collects, uses, discloses and protects information.
-
-## Attribution
-
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at <https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>.
-
-Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
-
-[homepage]: https://www.contributor-covenant.org
-
-For answers to common questions about this code of conduct, see the FAQ at <https://www.contributor-covenant.org/faq>.
-Translations are available at <https://www.contributor-covenant.org/translations>.
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
+
+## Our Standards
+
+Examples of behavior that contributes to a positive environment for our community include:
+
+* Demonstrating empathy and kindness toward other people
+* Being respectful of differing opinions, viewpoints, and experiences
+* Giving and gracefully accepting constructive feedback
+* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
+* Focusing on what is best not just for us as individuals, but for the overall community
+
+Examples of unacceptable behavior include:
+
+* The use of sexualized language or imagery, and sexual attention or advances of any kind
+* Trolling, insulting or derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or email address, without their explicit permission
+* Other conduct which could reasonably be considered inappropriate in a professional setting
+
+## Enforcement Responsibilities
+
+Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
+
+Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
+
+## Scope
+
+This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [info@tailscale.com](mailto:info@tailscale.com). All complaints will be reviewed and investigated promptly and fairly.
+
+All community leaders are obligated to respect the privacy and security of the reporter of any incident.
+
+## Enforcement Guidelines
+
+Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
+
+### 1. Correction
+
+**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
+
+**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
+
+### 2. Warning
+
+**Community Impact**: A violation through a single incident or series of actions.
+
+**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
+
+### 3. Temporary Ban
+
+**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
+
+**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
+
+### 4. Permanent Ban
+
+**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
+
+**Consequence**: A permanent ban from any sort of public interaction within the community.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+
+Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
+
+[homepage]: https://www.contributor-covenant.org
+
+For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

@@ -7,15 +7,6 @@
 # Tailscale images are currently built using https://github.com/tailscale/mkctr,
 # and the build script can be found in ./build_docker.sh.
 #
-# If you want to build local images for testing, you can use make.
-#
-# To build a Tailscale image and push to the local docker registry:
-#
-#   $ REPO=local/tailscale TAGS=v0.0.1 PLATFORM=local make publishdevimage
-#
-# To build a Tailscale image and push to a remote docker registry:
-#
-#   $ REPO=<your-registry>/<your-repo>/tailscale TAGS=v0.0.1 make publishdevimage
 #
 # This Dockerfile includes all the tailscale binaries.
 #
@@ -36,7 +27,7 @@
 #    $ docker exec tailscaled tailscale status
-FROM golang:1.25-alpine AS build-env
+FROM golang:1.23-alpine AS build-env
 WORKDIR /go/src/tailscale
@@ -71,15 +62,8 @@ RUN GOARCH=$TARGETARCH go install -ldflags="\
     -X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
     -v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot
-FROM alpine:3.22
+FROM alpine:3.18
 RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables
-# Alpine 3.19 replaced legacy iptables with nftables based implementation.
-# Tailscale is used on some hosts that don't support nftables, such as Synology
-# NAS, so link iptables back to legacy version. Hosts that don't require legacy
-# iptables should be able to use Tailscale in nftables mode. See
-# https://github.com/tailscale/tailscale/issues/17854
-RUN rm /usr/sbin/iptables && ln -s /usr/sbin/iptables-legacy /usr/sbin/iptables
-RUN rm /usr/sbin/ip6tables && ln -s /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables
 COPY --from=build-env /go/bin/* /usr/local/bin/
 # For compat with the previous run.sh, although ideally you should be
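The `RUN rm ... && ln -s ...` lines dropped on the right-hand side relink `iptables` to the legacy binary because Alpine 3.19+ ships an nftables-based `iptables` that some hosts (such as Synology NAS) cannot use. The symlink swap itself can be exercised in isolation; this sketch reproduces it against throwaway files in a scratch directory rather than real binaries in `/usr/sbin`:

```shell
# Reproduce the Dockerfile's iptables -> iptables-legacy relink using
# stand-in files, to show exactly what the removed RUN lines do.
sbin=$(mktemp -d)
touch "$sbin/iptables" "$sbin/iptables-legacy"

# Remove the nftables-backed wrapper and point the name at the legacy tool.
rm "$sbin/iptables"
ln -s "$sbin/iptables-legacy" "$sbin/iptables"

# "iptables" now resolves to the legacy implementation.
readlink "$sbin/iptables"
```

Hosts that do support nftables are unaffected in practice, since tailscaled can still be run in nftables mode.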

@ -1,12 +1,5 @@
# Copyright (c) Tailscale Inc & AUTHORS # Copyright (c) Tailscale Inc & AUTHORS
# SPDX-License-Identifier: BSD-3-Clause # SPDX-License-Identifier: BSD-3-Clause
FROM alpine:3.22 FROM alpine:3.18
RUN apk add --no-cache ca-certificates iptables iptables-legacy iproute2 ip6tables iputils RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables iputils
# Alpine 3.19 replaced legacy iptables with nftables based implementation.
# Tailscale is used on some hosts that don't support nftables, such as Synology
# NAS, so link iptables back to legacy version. Hosts that don't require legacy
# iptables should be able to use Tailscale in nftables mode. See
# https://github.com/tailscale/tailscale/issues/17854
RUN rm /usr/sbin/iptables && ln -s /usr/sbin/iptables-legacy /usr/sbin/iptables
RUN rm /usr/sbin/ip6tables && ln -s /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables

@ -8,9 +8,8 @@ PLATFORM ?= "flyio" ## flyio==linux/amd64. Set to "" to build all platforms.
vet: ## Run go vet vet: ## Run go vet
./tool/go vet ./... ./tool/go vet ./...
tidy: ## Run go mod tidy and update nix flake hashes tidy: ## Run go mod tidy
./tool/go mod tidy ./tool/go mod tidy
./update-flake.sh
lint: ## Run golangci-lint lint: ## Run golangci-lint
./tool/go run github.com/golangci/golangci-lint/cmd/golangci-lint run ./tool/go run github.com/golangci/golangci-lint/cmd/golangci-lint run
@ -18,36 +17,22 @@ lint: ## Run golangci-lint
updatedeps: ## Update depaware deps updatedeps: ## Update depaware deps
# depaware (via x/tools/go/packages) shells back to "go", so make sure the "go" # depaware (via x/tools/go/packages) shells back to "go", so make sure the "go"
# it finds in its $$PATH is the right one. # it finds in its $$PATH is the right one.
PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --update --vendor --internal \ PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --update \
tailscale.com/cmd/tailscaled \ tailscale.com/cmd/tailscaled \
tailscale.com/cmd/tailscale \ tailscale.com/cmd/tailscale \
tailscale.com/cmd/derper \ tailscale.com/cmd/derper \
tailscale.com/cmd/k8s-operator \ tailscale.com/cmd/k8s-operator \
tailscale.com/cmd/stund \ tailscale.com/cmd/stund
tailscale.com/cmd/tsidp
PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --update --goos=linux,darwin,windows,android,ios --vendor --internal \
tailscale.com/tsnet
PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --update --file=depaware-minbox.txt --goos=linux --tags="$$(./tool/go run ./cmd/featuretags --min --add=cli)" --vendor --internal \
tailscale.com/cmd/tailscaled
PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --update --file=depaware-min.txt --goos=linux --tags="$$(./tool/go run ./cmd/featuretags --min)" --vendor --internal \
tailscale.com/cmd/tailscaled
depaware: ## Run depaware checks depaware: ## Run depaware checks
# depaware (via x/tools/go/packages) shells back to "go", so make sure the "go" # depaware (via x/tools/go/packages) shells back to "go", so make sure the "go"
# it finds in its $$PATH is the right one. # it finds in its $$PATH is the right one.
PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --check --vendor --internal \ PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --check \
tailscale.com/cmd/tailscaled \ tailscale.com/cmd/tailscaled \
tailscale.com/cmd/tailscale \ tailscale.com/cmd/tailscale \
tailscale.com/cmd/derper \ tailscale.com/cmd/derper \
tailscale.com/cmd/k8s-operator \ tailscale.com/cmd/k8s-operator \
tailscale.com/cmd/stund \ tailscale.com/cmd/stund
tailscale.com/cmd/tsidp
PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --check --goos=linux,darwin,windows,android,ios --vendor --internal \
tailscale.com/tsnet
PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --check --file=depaware-minbox.txt --goos=linux --tags="$$(./tool/go run ./cmd/featuretags --min --add=cli)" --vendor --internal \
tailscale.com/cmd/tailscaled
PATH="$$(./tool/go env GOROOT)/bin:$$PATH" ./tool/go run github.com/tailscale/depaware --check --file=depaware-min.txt --goos=linux --tags="$$(./tool/go run ./cmd/featuretags --min)" --vendor --internal \
tailscale.com/cmd/tailscaled
buildwindows: ## Build tailscale CLI for windows/amd64
	GOOS=windows GOARCH=amd64 ./tool/go install tailscale.com/cmd/tailscale tailscale.com/cmd/tailscaled

@@ -73,7 +58,7 @@ buildmultiarchimage: ## Build (and optionally push) multiarch docker image
check: staticcheck vet depaware buildwindows build386 buildlinuxarm buildwasm ## Perform basic checks and compilation tests

staticcheck: ## Run staticcheck.io checks
-	./tool/go run honnef.co/go/tools/cmd/staticcheck -- $$(./tool/go run ./tool/listpkgs --ignore-3p ./...)
+	./tool/go run honnef.co/go/tools/cmd/staticcheck -- $$(./tool/go list ./... | grep -v tempfork)

kube-generate-all: kube-generate-deepcopy ## Refresh generated files for Tailscale Kubernetes Operator
	./tool/go generate ./cmd/k8s-operator

@@ -101,60 +86,43 @@ pushspk: spk ## Push and install synology package on ${SYNO_HOST} host
	scp tailscale.spk root@${SYNO_HOST}:
	ssh root@${SYNO_HOST} /usr/syno/bin/synopkg install tailscale.spk
-.PHONY: check-image-repo
-check-image-repo:
-	@if [ -z "$(REPO)" ]; then \
-		echo "REPO=... required; e.g. REPO=ghcr.io/$$USER/tailscale" >&2; \
-		exit 1; \
-	fi
-	@for repo in tailscale/tailscale ghcr.io/tailscale/tailscale \
-		tailscale/k8s-operator ghcr.io/tailscale/k8s-operator \
-		tailscale/k8s-nameserver ghcr.io/tailscale/k8s-nameserver \
-		tailscale/tsidp ghcr.io/tailscale/tsidp \
-		tailscale/k8s-proxy ghcr.io/tailscale/k8s-proxy; do \
-		if [ "$(REPO)" = "$$repo" ]; then \
-			echo "REPO=... must not be $$repo" >&2; \
-			exit 1; \
-		fi; \
-	done
-
-publishdevimage: check-image-repo ## Build and publish tailscale image to location specified by ${REPO}
+publishdevimage: ## Build and publish tailscale image to location specified by ${REPO}
+	@test -n "${REPO}" || (echo "REPO=... required; e.g. REPO=ghcr.io/${USER}/tailscale" && exit 1)
+	@test "${REPO}" != "tailscale/tailscale" || (echo "REPO=... must not be tailscale/tailscale" && exit 1)
+	@test "${REPO}" != "ghcr.io/tailscale/tailscale" || (echo "REPO=... must not be ghcr.io/tailscale/tailscale" && exit 1)
+	@test "${REPO}" != "tailscale/k8s-operator" || (echo "REPO=... must not be tailscale/k8s-operator" && exit 1)
+	@test "${REPO}" != "ghcr.io/tailscale/k8s-operator" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-operator" && exit 1)
	TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=client ./build_docker.sh

-publishdevoperator: check-image-repo ## Build and publish k8s-operator image to location specified by ${REPO}
+publishdevoperator: ## Build and publish k8s-operator image to location specified by ${REPO}
+	@test -n "${REPO}" || (echo "REPO=... required; e.g. REPO=ghcr.io/${USER}/tailscale" && exit 1)
+	@test "${REPO}" != "tailscale/tailscale" || (echo "REPO=... must not be tailscale/tailscale" && exit 1)
+	@test "${REPO}" != "ghcr.io/tailscale/tailscale" || (echo "REPO=... must not be ghcr.io/tailscale/tailscale" && exit 1)
+	@test "${REPO}" != "tailscale/k8s-operator" || (echo "REPO=... must not be tailscale/k8s-operator" && exit 1)
+	@test "${REPO}" != "ghcr.io/tailscale/k8s-operator" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-operator" && exit 1)
	TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=k8s-operator ./build_docker.sh

-publishdevnameserver: check-image-repo ## Build and publish k8s-nameserver image to location specified by ${REPO}
+publishdevnameserver: ## Build and publish k8s-nameserver image to location specified by ${REPO}
+	@test -n "${REPO}" || (echo "REPO=... required; e.g. REPO=ghcr.io/${USER}/tailscale" && exit 1)
+	@test "${REPO}" != "tailscale/tailscale" || (echo "REPO=... must not be tailscale/tailscale" && exit 1)
+	@test "${REPO}" != "ghcr.io/tailscale/tailscale" || (echo "REPO=... must not be ghcr.io/tailscale/tailscale" && exit 1)
+	@test "${REPO}" != "tailscale/k8s-nameserver" || (echo "REPO=... must not be tailscale/k8s-nameserver" && exit 1)
+	@test "${REPO}" != "ghcr.io/tailscale/k8s-nameserver" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-nameserver" && exit 1)
	TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=k8s-nameserver ./build_docker.sh
-
-publishdevtsidp: check-image-repo ## Build and publish tsidp image to location specified by ${REPO}
-	TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=tsidp ./build_docker.sh
-
-publishdevproxy: check-image-repo ## Build and publish k8s-proxy image to location specified by ${REPO}
-	TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=k8s-proxy ./build_docker.sh
.PHONY: sshintegrationtest
sshintegrationtest: ## Run the SSH integration tests in various Docker containers
-	@GOOS=linux GOARCH=amd64 CGO_ENABLED=0 ./tool/go test -tags integrationtest -c ./ssh/tailssh -o ssh/tailssh/testcontainers/tailssh.test && \
-	GOOS=linux GOARCH=amd64 CGO_ENABLED=0 ./tool/go build -o ssh/tailssh/testcontainers/tailscaled ./cmd/tailscaled && \
+	@GOOS=linux GOARCH=amd64 ./tool/go test -tags integrationtest -c ./ssh/tailssh -o ssh/tailssh/testcontainers/tailssh.test && \
+	GOOS=linux GOARCH=amd64 ./tool/go build -o ssh/tailssh/testcontainers/tailscaled ./cmd/tailscaled && \
	echo "Testing on ubuntu:focal" && docker build --build-arg="BASE=ubuntu:focal" -t ssh-ubuntu-focal ssh/tailssh/testcontainers && \
	echo "Testing on ubuntu:jammy" && docker build --build-arg="BASE=ubuntu:jammy" -t ssh-ubuntu-jammy ssh/tailssh/testcontainers && \
+	echo "Testing on ubuntu:mantic" && docker build --build-arg="BASE=ubuntu:mantic" -t ssh-ubuntu-mantic ssh/tailssh/testcontainers && \
	echo "Testing on ubuntu:noble" && docker build --build-arg="BASE=ubuntu:noble" -t ssh-ubuntu-noble ssh/tailssh/testcontainers && \
	echo "Testing on alpine:latest" && docker build --build-arg="BASE=alpine:latest" -t ssh-alpine-latest ssh/tailssh/testcontainers

-.PHONY: generate
-generate: ## Generate code
-	./tool/go generate ./...
-
-.PHONY: pin-github-actions
-pin-github-actions:
-	./tool/go tool github.com/stacklok/frizbee actions .github/workflows
-
help: ## Show this help
-	@echo ""
-	@echo "Specify a command. The choices are:"
-	@echo ""
-	@grep -hE '^[0-9a-zA-Z_-]+:.*?## .*$$' ${MAKEFILE_LIST} | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "  \033[0;36m%-20s\033[m %s\n", $$1, $$2}'
+	@echo "\nSpecify a command. The choices are:\n"
+	@grep -hE '^[0-9a-zA-Z_-]+:.*?## .*$$' ${MAKEFILE_LIST} | awk 'BEGIN {FS = ":.*?## "}; {printf "  \033[0;36m%-20s\033[m %s\n", $$1, $$2}'
	@echo ""
.PHONY: help

@@ -37,7 +37,7 @@ not open source.
## Building

-We always require the latest Go release, currently Go 1.25. (While we build
+We always require the latest Go release, currently Go 1.23. (While we build
releases with our [Go fork](https://github.com/tailscale/go/), its use is not
required.)

@@ -71,7 +71,8 @@ We require [Developer Certificate of
Origin](https://en.wikipedia.org/wiki/Developer_Certificate_of_Origin)
`Signed-off-by` lines in commits.

-See [commit-messages.md](docs/commit-messages.md) (or skim `git log`) for our commit message style.
+See `git log` for our commit message style. It's basically the same as
+[Go's style](https://github.com/golang/go/wiki/CommitMessage).

## About Us

@@ -1 +1 @@
-1.95.0
+1.78.3

@@ -12,20 +12,20 @@ package appc
import (
	"context"
	"fmt"
-	"maps"
	"net/netip"
	"slices"
	"strings"
+	"sync"
	"time"

-	"tailscale.com/syncs"
-	"tailscale.com/types/appctype"
+	xmaps "golang.org/x/exp/maps"
+	"golang.org/x/net/dns/dnsmessage"
	"tailscale.com/types/logger"
	"tailscale.com/types/views"
	"tailscale.com/util/clientmetric"
	"tailscale.com/util/dnsname"
-	"tailscale.com/util/eventbus"
	"tailscale.com/util/execqueue"
+	"tailscale.com/util/mak"
	"tailscale.com/util/slicesx"
)
@@ -116,6 +116,19 @@ func metricStoreRoutes(rate, nRoutes int64) {
	recordMetric(nRoutes, metricStoreRoutesNBuckets, metricStoreRoutesN)
}

+// RouteInfo is a data structure used to persist the in memory state of an AppConnector
+// so that we can know, even after a restart, which routes came from ACLs and which were
+// learned from domains.
+type RouteInfo struct {
+	// Control is the routes from the 'routes' section of an app connector acl.
+	Control []netip.Prefix `json:",omitempty"`
+	// Domains are the routes discovered by observing DNS lookups for configured domains.
+	Domains map[string][]netip.Addr `json:",omitempty"`
+	// Wildcards are the configured DNS lookup domains to observe. When a DNS query matches Wildcards,
+	// its result is added to Domains.
+	Wildcards []string `json:",omitempty"`
+}
// AppConnector is an implementation of an AppConnector that performs
// its function as a subsystem inside of a tailscale node. At the control plane
// side App Connector routing is configured in terms of domains rather than IP
@@ -126,20 +139,14 @@ func metricStoreRoutes(rate, nRoutes int64) {
// routes not yet served by the AppConnector the local node configuration is
// updated to advertise the new route.
type AppConnector struct {
-	// These fields are immutable after initialization.
	logf            logger.Logf
-	eventBus        *eventbus.Bus
	routeAdvertiser RouteAdvertiser
-	pubClient       *eventbus.Client
-	updatePub       *eventbus.Publisher[appctype.RouteUpdate]
-	storePub        *eventbus.Publisher[appctype.RouteInfo]

-	// hasStoredRoutes records whether the connector was initialized with
-	// persisted route information.
-	hasStoredRoutes bool
+	// storeRoutesFunc will be called to persist routes if it is not nil.
+	storeRoutesFunc func(*RouteInfo) error

	// mu guards the fields that follow
-	mu syncs.Mutex
+	mu sync.Mutex

	// domains is a map of lower case domain names with no trailing dot, to an
	// ordered list of resolved IP addresses.
@@ -158,83 +165,53 @@ type AppConnector struct {
	writeRateDay *rateLogger
}

-// Config carries the settings for an [AppConnector].
-type Config struct {
-	// Logf is the logger to which debug logs from the connector will be sent.
-	// It must be non-nil.
-	Logf logger.Logf
-
-	// EventBus receives events when the collection of routes maintained by the
-	// connector is updated. It must be non-nil.
-	EventBus *eventbus.Bus
-
-	// RouteAdvertiser allows the connector to update the set of advertised routes.
-	RouteAdvertiser RouteAdvertiser
-
-	// RouteInfo, if non-nil, is used as the initial set of routes for the
-	// connector. If nil, the connector starts empty.
-	RouteInfo *appctype.RouteInfo
-
-	// HasStoredRoutes indicates that the connector should assume stored routes.
-	HasStoredRoutes bool
-}
-
// NewAppConnector creates a new AppConnector.
-func NewAppConnector(c Config) *AppConnector {
-	switch {
-	case c.Logf == nil:
-		panic("missing logger")
-	case c.EventBus == nil:
-		panic("missing event bus")
-	}
-	ec := c.EventBus.Client("appc.AppConnector")
+func NewAppConnector(logf logger.Logf, routeAdvertiser RouteAdvertiser, routeInfo *RouteInfo, storeRoutesFunc func(*RouteInfo) error) *AppConnector {
	ac := &AppConnector{
-		logf:            logger.WithPrefix(c.Logf, "appc: "),
-		eventBus:        c.EventBus,
-		pubClient:       ec,
-		updatePub:       eventbus.Publish[appctype.RouteUpdate](ec),
-		storePub:        eventbus.Publish[appctype.RouteInfo](ec),
-		routeAdvertiser: c.RouteAdvertiser,
-		hasStoredRoutes: c.HasStoredRoutes,
+		logf:            logger.WithPrefix(logf, "appc: "),
+		routeAdvertiser: routeAdvertiser,
+		storeRoutesFunc: storeRoutesFunc,
	}
-	if c.RouteInfo != nil {
-		ac.domains = c.RouteInfo.Domains
-		ac.wildcards = c.RouteInfo.Wildcards
-		ac.controlRoutes = c.RouteInfo.Control
+	if routeInfo != nil {
+		ac.domains = routeInfo.Domains
+		ac.wildcards = routeInfo.Wildcards
+		ac.controlRoutes = routeInfo.Control
	}
-	ac.writeRateMinute = newRateLogger(time.Now, time.Minute, func(c int64, s time.Time, ln int64) {
-		ac.logf("routeInfo write rate: %d in minute starting at %v (%d routes)", c, s, ln)
-		metricStoreRoutes(c, ln)
+	ac.writeRateMinute = newRateLogger(time.Now, time.Minute, func(c int64, s time.Time, l int64) {
+		ac.logf("routeInfo write rate: %d in minute starting at %v (%d routes)", c, s, l)
+		metricStoreRoutes(c, l)
	})
-	ac.writeRateDay = newRateLogger(time.Now, 24*time.Hour, func(c int64, s time.Time, ln int64) {
-		ac.logf("routeInfo write rate: %d in 24 hours starting at %v (%d routes)", c, s, ln)
+	ac.writeRateDay = newRateLogger(time.Now, 24*time.Hour, func(c int64, s time.Time, l int64) {
+		ac.logf("routeInfo write rate: %d in 24 hours starting at %v (%d routes)", c, s, l)
	})
	return ac
}

// ShouldStoreRoutes returns true if the appconnector was created with the controlknob on
// and is storing its discovered routes persistently.
-func (e *AppConnector) ShouldStoreRoutes() bool { return e.hasStoredRoutes }
+func (e *AppConnector) ShouldStoreRoutes() bool {
+	return e.storeRoutesFunc != nil
+}
// storeRoutesLocked takes the current state of the AppConnector and persists it
-func (e *AppConnector) storeRoutesLocked() {
-	if e.storePub.ShouldPublish() {
-		// log write rate and write size
-		numRoutes := int64(len(e.controlRoutes))
-		for _, rs := range e.domains {
-			numRoutes += int64(len(rs))
-		}
-		e.writeRateMinute.update(numRoutes)
-		e.writeRateDay.update(numRoutes)
-
-		e.storePub.Publish(appctype.RouteInfo{
-			// Clone here, as the subscriber will handle these outside our lock.
-			Control:   slices.Clone(e.controlRoutes),
-			Domains:   maps.Clone(e.domains),
-			Wildcards: slices.Clone(e.wildcards),
-		})
+func (e *AppConnector) storeRoutesLocked() error {
+	if !e.ShouldStoreRoutes() {
+		return nil
	}
+
+	// log write rate and write size
+	numRoutes := int64(len(e.controlRoutes))
+	for _, rs := range e.domains {
+		numRoutes += int64(len(rs))
+	}
+	e.writeRateMinute.update(numRoutes)
+	e.writeRateDay.update(numRoutes)
+
+	return e.storeRoutesFunc(&RouteInfo{
+		Control:   e.controlRoutes,
+		Domains:   e.domains,
+		Wildcards: e.wildcards,
+	})
}
// ClearRoutes removes all route state from the AppConnector.
@@ -244,8 +221,7 @@ func (e *AppConnector) ClearRoutes() error {
	e.controlRoutes = nil
	e.domains = nil
	e.wildcards = nil
-	e.storeRoutesLocked()
-	return nil
+	return e.storeRoutesLocked()
}

// UpdateDomainsAndRoutes starts an asynchronous update of the configuration
@@ -274,18 +250,6 @@ func (e *AppConnector) Wait(ctx context.Context) {
	e.queue.Wait(ctx)
}

-// Close closes the connector and cleans up resources associated with it.
-// It is safe (and a noop) to call Close on nil.
-func (e *AppConnector) Close() {
-	if e == nil {
-		return
-	}
-	e.mu.Lock()
-	defer e.mu.Unlock()
-	e.queue.Shutdown() // TODO(creachadair): Should we wait for it too?
-	e.pubClient.Close()
-}
func (e *AppConnector) updateDomains(domains []string) {
	e.mu.Lock()
	defer e.mu.Unlock()
@@ -317,29 +281,21 @@ }
		}
	}

-	// Everything left in oldDomains is a domain we're no longer tracking and we
-	// can unadvertise the routes.
-	if e.hasStoredRoutes {
+	// Everything left in oldDomains is a domain we're no longer tracking
+	// and if we are storing route info we can unadvertise the routes
+	if e.ShouldStoreRoutes() {
		toRemove := []netip.Prefix{}
		for _, addrs := range oldDomains {
			for _, a := range addrs {
				toRemove = append(toRemove, netip.PrefixFrom(a, a.BitLen()))
			}
		}
-
-		if len(toRemove) != 0 {
-			if ra := e.routeAdvertiser; ra != nil {
-				e.queue.Add(func() {
-					if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
-						e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", slicesx.MapKeys(oldDomains), toRemove, err)
-					}
-				})
-			}
-			e.updatePub.Publish(appctype.RouteUpdate{Unadvertise: toRemove})
+		if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
+			e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", xmaps.Keys(oldDomains), toRemove, err)
		}
	}

-	e.logf("handling domains: %v and wildcards: %v", slicesx.MapKeys(e.domains), e.wildcards)
+	e.logf("handling domains: %v and wildcards: %v", xmaps.Keys(e.domains), e.wildcards)
}
// updateRoutes merges the supplied routes into the currently configured routes. The routes supplied
@@ -355,12 +311,18 @@ func (e *AppConnector) updateRoutes(routes []netip.Prefix) {
		return
	}

+	if err := e.routeAdvertiser.AdvertiseRoute(routes...); err != nil {
+		e.logf("failed to advertise routes: %v: %v", routes, err)
+		return
+	}
+
	var toRemove []netip.Prefix

-	// If we know e.controlRoutes is a good representation of what should be in
-	// AdvertisedRoutes we can stop advertising routes that used to be in
-	// e.controlRoutes but are not in routes.
-	if e.hasStoredRoutes {
+	// If we're storing routes and know e.controlRoutes is a good
+	// representation of what should be in AdvertisedRoutes we can stop
+	// advertising routes that used to be in e.controlRoutes but are not
+	// in routes.
+	if e.ShouldStoreRoutes() {
		toRemove = routesWithout(e.controlRoutes, routes)
	}
@@ -377,23 +339,14 @@ nextRoute:
		}
	}

-	if e.routeAdvertiser != nil {
-		e.queue.Add(func() {
-			if err := e.routeAdvertiser.AdvertiseRoute(routes...); err != nil {
-				e.logf("failed to advertise routes: %v: %v", routes, err)
-			}
-			if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
-				e.logf("failed to unadvertise routes: %v: %v", toRemove, err)
-			}
-		})
+	if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
+		e.logf("failed to unadvertise routes: %v: %v", toRemove, err)
	}
-	e.updatePub.Publish(appctype.RouteUpdate{
-		Advertise:   routes,
-		Unadvertise: toRemove,
-	})

	e.controlRoutes = routes
-	e.storeRoutesLocked()
+	if err := e.storeRoutesLocked(); err != nil {
+		e.logf("failed to store route info: %v", err)
+	}
}

// Domains returns the currently configured domain list.
@@ -401,7 +354,7 @@ func (e *AppConnector) Domains() views.Slice[string] {
	e.mu.Lock()
	defer e.mu.Unlock()
-	return views.SliceOf(slicesx.MapKeys(e.domains))
+	return views.SliceOf(xmaps.Keys(e.domains))
}
// DomainRoutes returns a map of domains to resolved IP
@@ -418,6 +371,123 @@ func (e *AppConnector) DomainRoutes() map[string][]netip.Addr {
	return drCopy
}

+// ObserveDNSResponse is a callback invoked by the DNS resolver when a DNS
+// response is being returned over the PeerAPI. The response is parsed and
+// matched against the configured domains, if matched the routeAdvertiser is
+// advised to advertise the discovered route.
+func (e *AppConnector) ObserveDNSResponse(res []byte) {
+	var p dnsmessage.Parser
+	if _, err := p.Start(res); err != nil {
+		return
+	}
+	if err := p.SkipAllQuestions(); err != nil {
+		return
+	}
+
+	// cnameChain tracks a chain of CNAMEs for a given query in order to reverse
+	// a CNAME chain back to the original query for flattening. The keys are
+	// CNAME record targets, and the value is the name the record answers, so
+	// for www.example.com CNAME example.com, the map would contain
+	// ["example.com"] = "www.example.com".
+	var cnameChain map[string]string
+
+	// addressRecords is a list of address records found in the response.
+	var addressRecords map[string][]netip.Addr
+
+	for {
+		h, err := p.AnswerHeader()
+		if err == dnsmessage.ErrSectionDone {
+			break
+		}
+		if err != nil {
+			return
+		}
+
+		if h.Class != dnsmessage.ClassINET {
+			if err := p.SkipAnswer(); err != nil {
+				return
+			}
+			continue
+		}
+
+		switch h.Type {
+		case dnsmessage.TypeCNAME, dnsmessage.TypeA, dnsmessage.TypeAAAA:
+		default:
+			if err := p.SkipAnswer(); err != nil {
+				return
+			}
+			continue
+		}
+
+		domain := strings.TrimSuffix(strings.ToLower(h.Name.String()), ".")
+		if len(domain) == 0 {
+			continue
+		}
+
+		if h.Type == dnsmessage.TypeCNAME {
+			res, err := p.CNAMEResource()
+			if err != nil {
+				return
+			}
+			cname := strings.TrimSuffix(strings.ToLower(res.CNAME.String()), ".")
+			if len(cname) == 0 {
+				continue
+			}
+			mak.Set(&cnameChain, cname, domain)
+			continue
+		}
+
+		switch h.Type {
+		case dnsmessage.TypeA:
+			r, err := p.AResource()
+			if err != nil {
+				return
+			}
+			addr := netip.AddrFrom4(r.A)
+			mak.Set(&addressRecords, domain, append(addressRecords[domain], addr))
+		case dnsmessage.TypeAAAA:
+			r, err := p.AAAAResource()
+			if err != nil {
+				return
+			}
+			addr := netip.AddrFrom16(r.AAAA)
+			mak.Set(&addressRecords, domain, append(addressRecords[domain], addr))
+		default:
+			if err := p.SkipAnswer(); err != nil {
+				return
+			}
+			continue
+		}
+	}
+
+	e.mu.Lock()
+	defer e.mu.Unlock()
+
+	for domain, addrs := range addressRecords {
+		domain, isRouted := e.findRoutedDomainLocked(domain, cnameChain)
+
+		// domain and none of the CNAMEs in the chain are routed
+		if !isRouted {
+			continue
+		}
+
+		// advertise each address we have learned for the routed domain, that
+		// was not already known.
+		var toAdvertise []netip.Prefix
+		for _, addr := range addrs {
+			if !e.isAddrKnownLocked(domain, addr) {
+				toAdvertise = append(toAdvertise, netip.PrefixFrom(addr, addr.BitLen()))
+			}
+		}
+
+		if len(toAdvertise) > 0 {
+			e.logf("[v2] observed new routes for %s: %s", domain, toAdvertise)
+			e.scheduleAdvertisement(domain, toAdvertise...)
+		}
+	}
+}
// starting from the given domain that resolved to an address, find it, or any
// of the domains in the CNAME chain toward resolving it, that are routed
// domains, returning the routed domain name and a bool indicating whether a
@@ -472,13 +542,10 @@ func (e *AppConnector) isAddrKnownLocked(domain string, addr netip.Addr) bool {
// associated with the given domain.
func (e *AppConnector) scheduleAdvertisement(domain string, routes ...netip.Prefix) {
	e.queue.Add(func() {
-		if e.routeAdvertiser != nil {
-			if err := e.routeAdvertiser.AdvertiseRoute(routes...); err != nil {
-				e.logf("failed to advertise routes for %s: %v: %v", domain, routes, err)
-				return
-			}
+		if err := e.routeAdvertiser.AdvertiseRoute(routes...); err != nil {
+			e.logf("failed to advertise routes for %s: %v: %v", domain, routes, err)
+			return
		}
-		e.updatePub.Publish(appctype.RouteUpdate{Advertise: routes})

		e.mu.Lock()
		defer e.mu.Unlock()
@@ -492,7 +559,9 @@ func (e *AppConnector) scheduleAdvertisement(domain string, routes ...netip.Pref
				e.logf("[v2] advertised route for %v: %v", domain, addr)
			}
		}
-		e.storeRoutesLocked()
+		if err := e.storeRoutesLocked(); err != nil {
+			e.logf("failed to store route info: %v", err)
+		}
	})
}

@@ -510,8 +579,8 @@ func (e *AppConnector) addDomainAddrLocked(domain string, addr netip.Addr) {
	slices.SortFunc(e.domains[domain], compareAddr)
}

-func compareAddr(a, b netip.Addr) int {
-	return a.Compare(b)
+func compareAddr(l, r netip.Addr) int {
+	return l.Compare(r)
}

// routesWithout returns a without b where a and b

@@ -4,40 +4,35 @@
package appc

import (
-	stdcmp "cmp"
+	"context"
+	"fmt"
	"net/netip"
	"reflect"
	"slices"
-	"sync/atomic"
	"testing"
	"time"

-	"github.com/google/go-cmp/cmp"
-	"github.com/google/go-cmp/cmp/cmpopts"
+	xmaps "golang.org/x/exp/maps"
	"golang.org/x/net/dns/dnsmessage"
	"tailscale.com/appc/appctest"
	"tailscale.com/tstest"
-	"tailscale.com/types/appctype"
	"tailscale.com/util/clientmetric"
-	"tailscale.com/util/eventbus/eventbustest"
	"tailscale.com/util/mak"
	"tailscale.com/util/must"
-	"tailscale.com/util/slicesx"
)

+func fakeStoreRoutes(*RouteInfo) error { return nil }
func TestUpdateDomains(t *testing.T) {
-	ctx := t.Context()
-	bus := eventbustest.NewBus(t)
	for _, shouldStore := range []bool{false, true} {
-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
+		ctx := context.Background()
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, &appctest.RouteCollector{}, nil, nil)
+		}
		a.UpdateDomains([]string{"example.com"})
		a.Wait(ctx)
		if got, want := a.Domains().AsSlice(), []string{"example.com"}; !slices.Equal(got, want) {
			t.Errorf("got %v; want %v", got, want)
@@ -55,32 +50,26 @@ func TestUpdateDomains(t *testing.T) {
		// domains are explicitly downcased on set.
		a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
		a.Wait(ctx)
-		if got, want := slicesx.MapKeys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
+		if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
			t.Errorf("got %v; want %v", got, want)
		}
	}
}
func TestUpdateRoutes(t *testing.T) {
-	ctx := t.Context()
-	bus := eventbustest.NewBus(t)
	for _, shouldStore := range []bool{false, true} {
-		w := eventbustest.NewWatcher(t, bus)
+		ctx := context.Background()
		rc := &appctest.RouteCollector{}
-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			RouteAdvertiser: rc,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, rc, nil, nil)
+		}
		a.updateDomains([]string{"*.example.com"})

		// This route should be collapsed into the range
-		if err := a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsResponse("a.example.com.", "192.0.2.1"))
		a.Wait(ctx)

		if !slices.Equal(rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")}) {
@@ -88,14 +77,11 @@ func TestUpdateRoutes(t *testing.T) {
		}

		// This route should not be collapsed or removed
-		if err := a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsResponse("b.example.com.", "192.0.0.1"))
		a.Wait(ctx)

		routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24"), netip.MustParsePrefix("192.0.0.1/32")}
		a.updateRoutes(routes)
-		a.Wait(ctx)

		slices.SortFunc(rc.Routes(), prefixCompare)
		rc.SetRoutes(slices.Compact(rc.Routes()))
@@ -111,76 +97,41 @@ func TestUpdateRoutes(t *testing.T) {
		if !slices.EqualFunc(rc.RemovedRoutes(), wantRemoved, prefixEqual) {
			t.Fatalf("unexpected removed routes: %v", rc.RemovedRoutes())
		}
-
-		if err := eventbustest.Expect(w,
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("192.0.2.1/32")}),
-			eventbustest.Type[appctype.RouteInfo](),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("192.0.0.1/32")}),
-			eventbustest.Type[appctype.RouteInfo](),
-			eqUpdate(appctype.RouteUpdate{
-				Advertise:   prefixes("192.0.0.1/32", "192.0.2.0/24"),
-				Unadvertise: prefixes("192.0.2.1/32"),
-			}),
-			eventbustest.Type[appctype.RouteInfo](),
-		); err != nil {
-			t.Error(err)
-		}
	}
}
func TestUpdateRoutesUnadvertisesContainedRoutes(t *testing.T) {
-	ctx := t.Context()
-	bus := eventbustest.NewBus(t)
	for _, shouldStore := range []bool{false, true} {
-		w := eventbustest.NewWatcher(t, bus)
		rc := &appctest.RouteCollector{}
-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			RouteAdvertiser: rc,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, rc, nil, nil)
+		}
		mak.Set(&a.domains, "example.com", []netip.Addr{netip.MustParseAddr("192.0.2.1")})
		rc.SetRoutes([]netip.Prefix{netip.MustParsePrefix("192.0.2.1/32")})
		routes := []netip.Prefix{netip.MustParsePrefix("192.0.2.0/24")}
		a.updateRoutes(routes)
-		a.Wait(ctx)

		if !slices.EqualFunc(routes, rc.Routes(), prefixEqual) {
			t.Fatalf("got %v, want %v", rc.Routes(), routes)
		}
-
-		if err := eventbustest.ExpectExactly(w,
-			eqUpdate(appctype.RouteUpdate{
-				Advertise:   prefixes("192.0.2.0/24"),
-				Unadvertise: prefixes("192.0.2.1/32"),
-			}),
-			eventbustest.Type[appctype.RouteInfo](),
-		); err != nil {
-			t.Error(err)
-		}
	}
}
 func TestDomainRoutes(t *testing.T) {
-	bus := eventbustest.NewBus(t)
 	for _, shouldStore := range []bool{false, true} {
-		w := eventbustest.NewWatcher(t, bus)
 		rc := &appctest.RouteCollector{}
-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			RouteAdvertiser: rc,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
-		a.updateDomains([]string{"example.com"})
-		if err := a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
-		a.Wait(t.Context())
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, rc, nil, nil)
+		}
+		a.updateDomains([]string{"example.com"})
+		a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
+		a.Wait(context.Background())
 		want := map[string][]netip.Addr{
 			"example.com": {netip.MustParseAddr("192.0.0.8")},
@@ -189,34 +140,22 @@ func TestDomainRoutes(t *testing.T) {
 		if got := a.DomainRoutes(); !reflect.DeepEqual(got, want) {
 			t.Fatalf("DomainRoutes: got %v, want %v", got, want)
 		}
-		if err := eventbustest.ExpectExactly(w,
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("192.0.0.8/32")}),
-			eventbustest.Type[appctype.RouteInfo](),
-		); err != nil {
-			t.Error(err)
-		}
 	}
 }
 func TestObserveDNSResponse(t *testing.T) {
-	ctx := t.Context()
-	bus := eventbustest.NewBus(t)
 	for _, shouldStore := range []bool{false, true} {
-		w := eventbustest.NewWatcher(t, bus)
+		ctx := context.Background()
 		rc := &appctest.RouteCollector{}
-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			RouteAdvertiser: rc,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, rc, nil, nil)
+		}
 		// a has no domains configured, so it should not advertise any routes
-		if err := a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
 		if got, want := rc.Routes(), ([]netip.Prefix)(nil); !slices.Equal(got, want) {
 			t.Errorf("got %v; want %v", got, want)
 		}
@@ -224,9 +163,7 @@ func TestObserveDNSResponse(t *testing.T) {
 		wantRoutes := []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}
 		a.updateDomains([]string{"example.com"})
-		if err := a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.0.8"))
 		a.Wait(ctx)
 		if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
 			t.Errorf("got %v; want %v", got, want)
@@ -235,9 +172,7 @@ func TestObserveDNSResponse(t *testing.T) {
 		// a CNAME record chain should result in a route being added if the chain
 		// matches a routed domain.
 		a.updateDomains([]string{"www.example.com", "example.com"})
-		if err := a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com.")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.9", "www.example.com.", "chain.example.com.", "example.com."))
 		a.Wait(ctx)
 		wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.9/32"))
 		if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
@@ -246,9 +181,7 @@ func TestObserveDNSResponse(t *testing.T) {
 		// a CNAME record chain should result in a route being added if a routed
 		// domain appears even in the middle of the chain
-		if err := a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org.")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsCNAMEResponse("192.0.0.10", "outside.example.org.", "www.example.com.", "example.org."))
 		a.Wait(ctx)
 		wantRoutes = append(wantRoutes, netip.MustParsePrefix("192.0.0.10/32"))
 		if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
@@ -257,18 +190,14 @@ func TestObserveDNSResponse(t *testing.T) {
 		wantRoutes = append(wantRoutes, netip.MustParsePrefix("2001:db8::1/128"))
-		if err := a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
 		a.Wait(ctx)
 		if got, want := rc.Routes(), wantRoutes; !slices.Equal(got, want) {
 			t.Errorf("got %v; want %v", got, want)
 		}
 		// don't re-advertise routes that have already been advertised
-		if err := a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsResponse("example.com.", "2001:db8::1"))
 		a.Wait(ctx)
 		if !slices.Equal(rc.Routes(), wantRoutes) {
 			t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
@@ -278,9 +207,7 @@ func TestObserveDNSResponse(t *testing.T) {
 		pfx := netip.MustParsePrefix("192.0.2.0/24")
 		a.updateRoutes([]netip.Prefix{pfx})
 		wantRoutes = append(wantRoutes, pfx)
-		if err := a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsResponse("example.com.", "192.0.2.1"))
 		a.Wait(ctx)
 		if !slices.Equal(rc.Routes(), wantRoutes) {
 			t.Errorf("rc.Routes(): got %v; want %v", rc.Routes(), wantRoutes)
@@ -288,43 +215,22 @@ func TestObserveDNSResponse(t *testing.T) {
 		if !slices.Contains(a.domains["example.com"], netip.MustParseAddr("192.0.2.1")) {
 			t.Errorf("missing %v from %v", "192.0.2.1", a.domains["example.com"])
 		}
-		if err := eventbustest.ExpectExactly(w,
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("192.0.0.8/32")}), // from initial DNS response, via example.com
-			eventbustest.Type[appctype.RouteInfo](),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("192.0.0.9/32")}), // from CNAME response
-			eventbustest.Type[appctype.RouteInfo](),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("192.0.0.10/32")}), // from CNAME response, mid-chain
-			eventbustest.Type[appctype.RouteInfo](),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("2001:db8::1/128")}), // v6 DNS response
-			eventbustest.Type[appctype.RouteInfo](),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("192.0.2.0/24")}), // additional prefix
-			eventbustest.Type[appctype.RouteInfo](),
-			// N.B. no update for 192.0.2.1 as it is already covered
-		); err != nil {
-			t.Error(err)
-		}
 	}
 }
 func TestWildcardDomains(t *testing.T) {
-	ctx := t.Context()
-	bus := eventbustest.NewBus(t)
 	for _, shouldStore := range []bool{false, true} {
-		w := eventbustest.NewWatcher(t, bus)
+		ctx := context.Background()
 		rc := &appctest.RouteCollector{}
-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			RouteAdvertiser: rc,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, rc, nil, nil)
+		}
 		a.updateDomains([]string{"*.example.com"})
-		if err := a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8")); err != nil {
-			t.Errorf("ObserveDNSResponse: %v", err)
-		}
+		a.ObserveDNSResponse(dnsResponse("foo.example.com.", "192.0.0.8"))
 		a.Wait(ctx)
 		if got, want := rc.Routes(), []netip.Prefix{netip.MustParsePrefix("192.0.0.8/32")}; !slices.Equal(got, want) {
 			t.Errorf("routes: got %v; want %v", got, want)
@@ -346,13 +252,6 @@ func TestWildcardDomains(t *testing.T) {
 		if len(a.wildcards) != 1 {
 			t.Errorf("expected only one wildcard domain, got %v", a.wildcards)
 		}
-		if err := eventbustest.ExpectExactly(w,
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("192.0.0.8/32")}),
-			eventbustest.Type[appctype.RouteInfo](),
-		); err != nil {
-			t.Error(err)
-		}
 	}
 }
@@ -468,10 +367,8 @@ func prefixes(in ...string) []netip.Prefix {
 }

 func TestUpdateRouteRouteRemoval(t *testing.T) {
-	ctx := t.Context()
-	bus := eventbustest.NewBus(t)
 	for _, shouldStore := range []bool{false, true} {
-		w := eventbustest.NewWatcher(t, bus)
+		ctx := context.Background()
 		rc := &appctest.RouteCollector{}

 		assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
@@ -483,14 +380,12 @@ func TestUpdateRouteRouteRemoval(t *testing.T) {
 			}
 		}

-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			RouteAdvertiser: rc,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, rc, nil, nil)
+		}
 		// nothing has yet been advertised
 		assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
@@ -513,21 +408,12 @@ func TestUpdateRouteRouteRemoval(t *testing.T) {
 			wantRemovedRoutes = prefixes("1.2.3.2/32")
 		}
 		assertRoutes("removal", wantRoutes, wantRemovedRoutes)
-		if err := eventbustest.Expect(w,
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.1/32", "1.2.3.2/32")}), // no duplicates here
-			eventbustest.Type[appctype.RouteInfo](),
-		); err != nil {
-			t.Error(err)
-		}
 	}
 }
 func TestUpdateDomainRouteRemoval(t *testing.T) {
-	ctx := t.Context()
-	bus := eventbustest.NewBus(t)
 	for _, shouldStore := range []bool{false, true} {
-		w := eventbustest.NewWatcher(t, bus)
+		ctx := context.Background()
 		rc := &appctest.RouteCollector{}

 		assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
@@ -539,14 +425,12 @@ func TestUpdateDomainRouteRemoval(t *testing.T) {
 			}
 		}

-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			RouteAdvertiser: rc,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, rc, nil, nil)
+		}
 		assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
 		a.UpdateDomainsAndRoutes([]string{"a.example.com", "b.example.com"}, []netip.Prefix{})
@@ -554,16 +438,10 @@ func TestUpdateDomainRouteRemoval(t *testing.T) {
 		// adding domains doesn't immediately cause any routes to be advertised
 		assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})

-		for _, res := range [][]byte{
-			dnsResponse("a.example.com.", "1.2.3.1"),
-			dnsResponse("a.example.com.", "1.2.3.2"),
-			dnsResponse("b.example.com.", "1.2.3.3"),
-			dnsResponse("b.example.com.", "1.2.3.4"),
-		} {
-			if err := a.ObserveDNSResponse(res); err != nil {
-				t.Errorf("ObserveDNSResponse: %v", err)
-			}
-		}
+		a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
+		a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
+		a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.3"))
+		a.ObserveDNSResponse(dnsResponse("b.example.com.", "1.2.3.4"))
 		a.Wait(ctx)
 		// observing dns responses causes routes to be advertised
 		assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
@@ -579,30 +457,12 @@ func TestUpdateDomainRouteRemoval(t *testing.T) {
 			wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
 		}
 		assertRoutes("removal", wantRoutes, wantRemovedRoutes)
-		wantEvents := []any{
-			// Each DNS record observed triggers an update.
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.1/32")}),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.2/32")}),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.3/32")}),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.4/32")}),
-		}
-		if shouldStore {
-			wantEvents = append(wantEvents, eqUpdate(appctype.RouteUpdate{
-				Unadvertise: prefixes("1.2.3.3/32", "1.2.3.4/32"),
-			}))
-		}
-		if err := eventbustest.Expect(w, wantEvents...); err != nil {
-			t.Error(err)
-		}
 	}
 }
 func TestUpdateWildcardRouteRemoval(t *testing.T) {
-	ctx := t.Context()
-	bus := eventbustest.NewBus(t)
 	for _, shouldStore := range []bool{false, true} {
-		w := eventbustest.NewWatcher(t, bus)
+		ctx := context.Background()
 		rc := &appctest.RouteCollector{}

 		assertRoutes := func(prefix string, routes, removedRoutes []netip.Prefix) {
@@ -614,14 +474,12 @@ func TestUpdateWildcardRouteRemoval(t *testing.T) {
 			}
 		}

-		a := NewAppConnector(Config{
-			Logf:            t.Logf,
-			EventBus:        bus,
-			RouteAdvertiser: rc,
-			HasStoredRoutes: shouldStore,
-		})
-		t.Cleanup(a.Close)
+		var a *AppConnector
+		if shouldStore {
+			a = NewAppConnector(t.Logf, rc, &RouteInfo{}, fakeStoreRoutes)
+		} else {
+			a = NewAppConnector(t.Logf, rc, nil, nil)
+		}
 		assertRoutes("appc init", []netip.Prefix{}, []netip.Prefix{})
 		a.UpdateDomainsAndRoutes([]string{"a.example.com", "*.b.example.com"}, []netip.Prefix{})
@@ -629,16 +487,10 @@ func TestUpdateWildcardRouteRemoval(t *testing.T) {
 		// adding domains doesn't immediately cause any routes to be advertised
 		assertRoutes("update domains", []netip.Prefix{}, []netip.Prefix{})

-		for _, res := range [][]byte{
-			dnsResponse("a.example.com.", "1.2.3.1"),
-			dnsResponse("a.example.com.", "1.2.3.2"),
-			dnsResponse("1.b.example.com.", "1.2.3.3"),
-			dnsResponse("2.b.example.com.", "1.2.3.4"),
-		} {
-			if err := a.ObserveDNSResponse(res); err != nil {
-				t.Errorf("ObserveDNSResponse: %v", err)
-			}
-		}
+		a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.1"))
+		a.ObserveDNSResponse(dnsResponse("a.example.com.", "1.2.3.2"))
+		a.ObserveDNSResponse(dnsResponse("1.b.example.com.", "1.2.3.3"))
+		a.ObserveDNSResponse(dnsResponse("2.b.example.com.", "1.2.3.4"))
 		a.Wait(ctx)
 		// observing dns responses causes routes to be advertised
 		assertRoutes("observed dns", prefixes("1.2.3.1/32", "1.2.3.2/32", "1.2.3.3/32", "1.2.3.4/32"), []netip.Prefix{})
@@ -654,22 +506,6 @@ func TestUpdateWildcardRouteRemoval(t *testing.T) {
 			wantRemovedRoutes = prefixes("1.2.3.3/32", "1.2.3.4/32")
 		}
 		assertRoutes("removal", wantRoutes, wantRemovedRoutes)
-		wantEvents := []any{
-			// Each DNS record observed triggers an update.
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.1/32")}),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.2/32")}),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.3/32")}),
-			eqUpdate(appctype.RouteUpdate{Advertise: prefixes("1.2.3.4/32")}),
-		}
-		if shouldStore {
-			wantEvents = append(wantEvents, eqUpdate(appctype.RouteUpdate{
-				Unadvertise: prefixes("1.2.3.3/32", "1.2.3.4/32"),
-			}))
-		}
-		if err := eventbustest.Expect(w, wantEvents...); err != nil {
-			t.Error(err)
-		}
 	}
 }
@@ -766,107 +602,3 @@ func TestMetricBucketsAreSorted(t *testing.T) {
		t.Errorf("metricStoreRoutesNBuckets must be in order")
	}
}
// TestUpdateRoutesDeadlock is a regression test for a deadlock in
// LocalBackend<->AppConnector interaction. When using real LocalBackend as the
// routeAdvertiser, calls to Advertise/UnadvertiseRoutes can end up calling
// back into AppConnector via authReconfig. If everything is called
// synchronously, this results in a deadlock on AppConnector.mu.
//
// TODO(creachadair, 2025-09-18): Remove this along with the advertiser
// interface once the LocalBackend is switched to use the event bus and the
// tests have been updated not to need it.
func TestUpdateRoutesDeadlock(t *testing.T) {
ctx := t.Context()
bus := eventbustest.NewBus(t)
w := eventbustest.NewWatcher(t, bus)
rc := &appctest.RouteCollector{}
a := NewAppConnector(Config{
Logf: t.Logf,
EventBus: bus,
RouteAdvertiser: rc,
HasStoredRoutes: true,
})
t.Cleanup(a.Close)
advertiseCalled := new(atomic.Bool)
unadvertiseCalled := new(atomic.Bool)
rc.AdvertiseCallback = func() {
// Call something that requires a.mu to be held.
a.DomainRoutes()
advertiseCalled.Store(true)
}
rc.UnadvertiseCallback = func() {
// Call something that requires a.mu to be held.
a.DomainRoutes()
unadvertiseCalled.Store(true)
}
a.updateDomains([]string{"example.com"})
a.Wait(ctx)
// Trigger rc.AdvertiseRoute.
a.updateRoutes(
[]netip.Prefix{
netip.MustParsePrefix("127.0.0.1/32"),
netip.MustParsePrefix("127.0.0.2/32"),
},
)
a.Wait(ctx)
// Trigger rc.UnadvertiseRoute.
a.updateRoutes(
[]netip.Prefix{
netip.MustParsePrefix("127.0.0.1/32"),
},
)
a.Wait(ctx)
if !advertiseCalled.Load() {
t.Error("AdvertiseRoute was not called")
}
if !unadvertiseCalled.Load() {
t.Error("UnadvertiseRoute was not called")
}
if want := []netip.Prefix{netip.MustParsePrefix("127.0.0.1/32")}; !slices.Equal(slices.Compact(rc.Routes()), want) {
t.Fatalf("got %v, want %v", rc.Routes(), want)
}
if err := eventbustest.ExpectExactly(w,
eqUpdate(appctype.RouteUpdate{Advertise: prefixes("127.0.0.1/32", "127.0.0.2/32")}),
eventbustest.Type[appctype.RouteInfo](),
eqUpdate(appctype.RouteUpdate{Advertise: prefixes("127.0.0.1/32"), Unadvertise: prefixes("127.0.0.2/32")}),
eventbustest.Type[appctype.RouteInfo](),
); err != nil {
t.Error(err)
}
}
type textUpdate struct {
Advertise []string
Unadvertise []string
}
func routeUpdateToText(u appctype.RouteUpdate) textUpdate {
var out textUpdate
for _, p := range u.Advertise {
out.Advertise = append(out.Advertise, p.String())
}
for _, p := range u.Unadvertise {
out.Unadvertise = append(out.Unadvertise, p.String())
}
return out
}
// eqUpdate generates an eventbus test filter that matches an appctype.RouteUpdate
// message equal to want, or reports an error giving a human-readable diff.
func eqUpdate(want appctype.RouteUpdate) func(appctype.RouteUpdate) error {
return func(got appctype.RouteUpdate) error {
if diff := cmp.Diff(routeUpdateToText(got), routeUpdateToText(want),
cmpopts.SortSlices(stdcmp.Less[string]),
); diff != "" {
return fmt.Errorf("wrong update (-got, +want):\n%s", diff)
}
return nil
}
}

@@ -11,22 +11,12 @@ import (
 // RouteCollector is a test helper that collects the list of routes advertised
 type RouteCollector struct {
-	// AdvertiseCallback (optional) is called synchronously from
-	// AdvertiseRoute.
-	AdvertiseCallback func()
-	// UnadvertiseCallback (optional) is called synchronously from
-	// UnadvertiseRoute.
-	UnadvertiseCallback func()

 	routes        []netip.Prefix
 	removedRoutes []netip.Prefix
 }

 func (rc *RouteCollector) AdvertiseRoute(pfx ...netip.Prefix) error {
 	rc.routes = append(rc.routes, pfx...)
-	if rc.AdvertiseCallback != nil {
-		rc.AdvertiseCallback()
-	}
 	return nil
 }
@@ -40,9 +30,6 @@ func (rc *RouteCollector) UnadvertiseRoute(toRemove ...netip.Prefix) error {
 			rc.removedRoutes = append(rc.removedRoutes, r)
 		}
 	}
-	if rc.UnadvertiseCallback != nil {
-		rc.UnadvertiseCallback()
-	}
 	return nil
 }

@@ -1,110 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package appc
import (
"net/netip"
"sync"
"tailscale.com/tailcfg"
)
// Conn25 holds the developing state for the as yet nascent next generation app connector.
// There is currently (2025-12-08) no actual app connecting functionality.
type Conn25 struct {
mu sync.Mutex
transitIPs map[tailcfg.NodeID]map[netip.Addr]netip.Addr
}
const dupeTransitIPMessage = "Duplicate transit address in ConnectorTransitIPRequest"
// HandleConnectorTransitIPRequest creates a ConnectorTransitIPResponse in response to a ConnectorTransitIPRequest.
// It updates the connector's per-peer (tailcfg.NodeID) mapping of TransitIP->DestinationIP.
// Once a peer has stored a mapping in the connector, Conn25 will route traffic addressed to
// the TransitIP on to the corresponding DestinationIP for that peer.
func (c *Conn25) HandleConnectorTransitIPRequest(nid tailcfg.NodeID, ctipr ConnectorTransitIPRequest) ConnectorTransitIPResponse {
resp := ConnectorTransitIPResponse{}
seen := map[netip.Addr]bool{}
for _, each := range ctipr.TransitIPs {
if seen[each.TransitIP] {
resp.TransitIPs = append(resp.TransitIPs, TransitIPResponse{
Code: OtherFailure,
Message: dupeTransitIPMessage,
})
continue
}
tipresp := c.handleTransitIPRequest(nid, each)
seen[each.TransitIP] = true
resp.TransitIPs = append(resp.TransitIPs, tipresp)
}
return resp
}
func (c *Conn25) handleTransitIPRequest(nid tailcfg.NodeID, tipr TransitIPRequest) TransitIPResponse {
c.mu.Lock()
defer c.mu.Unlock()
if c.transitIPs == nil {
c.transitIPs = make(map[tailcfg.NodeID]map[netip.Addr]netip.Addr)
}
peerMap, ok := c.transitIPs[nid]
if !ok {
peerMap = make(map[netip.Addr]netip.Addr)
c.transitIPs[nid] = peerMap
}
peerMap[tipr.TransitIP] = tipr.DestinationIP
return TransitIPResponse{}
}
func (c *Conn25) transitIPTarget(nid tailcfg.NodeID, tip netip.Addr) netip.Addr {
c.mu.Lock()
defer c.mu.Unlock()
return c.transitIPs[nid][tip]
}
// TransitIPRequest details a single TransitIP allocation request from a client to a
// connector.
type TransitIPRequest struct {
// TransitIP is the intermediate destination IP that will be received at this
// connector and will be replaced by DestinationIP when performing DNAT.
TransitIP netip.Addr `json:"transitIP,omitzero"`
// DestinationIP is the final destination IP that connections to the TransitIP
// should be mapped to when performing DNAT.
DestinationIP netip.Addr `json:"destinationIP,omitzero"`
}
// ConnectorTransitIPRequest is the request body for a PeerAPI request to
// /connector/transit-ip and can include zero or more TransitIP allocation requests.
type ConnectorTransitIPRequest struct {
// TransitIPs is the list of requested mappings.
TransitIPs []TransitIPRequest `json:"transitIPs,omitempty"`
}
// TransitIPResponseCode appears in TransitIPResponse and signifies success or failure status.
type TransitIPResponseCode int
const (
// OK indicates that the mapping was created as requested.
OK TransitIPResponseCode = 0
// OtherFailure indicates that the mapping failed for a reason that does not have
// another relevant [TransitIPResponseCode].
OtherFailure TransitIPResponseCode = 1
)
// TransitIPResponse is the response to a TransitIPRequest
type TransitIPResponse struct {
// Code is an error code indicating success or failure of the [TransitIPRequest].
Code TransitIPResponseCode `json:"code,omitzero"`
// Message is an error message explaining what happened, suitable for logging but
// not necessarily suitable for displaying in a UI to non-technical users. It
// should be empty when [Code] is [OK].
Message string `json:"message,omitzero"`
}
// ConnectorTransitIPResponse is the response to a ConnectorTransitIPRequest
type ConnectorTransitIPResponse struct {
// TransitIPs is the list of outcomes for each requested mapping. Elements
// correspond to the order of [ConnectorTransitIPRequest.TransitIPs].
TransitIPs []TransitIPResponse `json:"transitIPs,omitempty"`
}

@@ -1,188 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package appc
import (
"net/netip"
"testing"
"tailscale.com/tailcfg"
)
// TestHandleConnectorTransitIPRequestZeroLength tests that if sent a
// ConnectorTransitIPRequest with 0 TransitIPRequests, we respond with a
// ConnectorTransitIPResponse with 0 TransitIPResponses.
func TestHandleConnectorTransitIPRequestZeroLength(t *testing.T) {
c := &Conn25{}
req := ConnectorTransitIPRequest{}
nid := tailcfg.NodeID(1)
resp := c.HandleConnectorTransitIPRequest(nid, req)
if len(resp.TransitIPs) != 0 {
t.Fatalf("n TransitIPs in response: %d, want 0", len(resp.TransitIPs))
}
}
// TestHandleConnectorTransitIPRequestStoresAddr tests that if sent a
// request with a transit addr and a destination addr we store that mapping
// and can retrieve it. If sent another req with a different dst for that transit addr
// we store that instead.
func TestHandleConnectorTransitIPRequestStoresAddr(t *testing.T) {
c := &Conn25{}
nid := tailcfg.NodeID(1)
tip := netip.MustParseAddr("0.0.0.1")
dip := netip.MustParseAddr("1.2.3.4")
dip2 := netip.MustParseAddr("1.2.3.5")
mr := func(t, d netip.Addr) ConnectorTransitIPRequest {
return ConnectorTransitIPRequest{
TransitIPs: []TransitIPRequest{
{TransitIP: t, DestinationIP: d},
},
}
}
resp := c.HandleConnectorTransitIPRequest(nid, mr(tip, dip))
if len(resp.TransitIPs) != 1 {
t.Fatalf("n TransitIPs in response: %d, want 1", len(resp.TransitIPs))
}
got := resp.TransitIPs[0].Code
if got != TransitIPResponseCode(0) {
t.Fatalf("TransitIP Code: %d, want 0", got)
}
gotAddr := c.transitIPTarget(nid, tip)
if gotAddr != dip {
t.Fatalf("Connector stored destination for tip: %v, want %v", gotAddr, dip)
}
// mapping can be overwritten
resp2 := c.HandleConnectorTransitIPRequest(nid, mr(tip, dip2))
if len(resp2.TransitIPs) != 1 {
t.Fatalf("n TransitIPs in response: %d, want 1", len(resp2.TransitIPs))
}
got2 := resp2.TransitIPs[0].Code
if got2 != TransitIPResponseCode(0) {
t.Fatalf("TransitIP Code: %d, want 0", got2)
}
gotAddr2 := c.transitIPTarget(nid, tip)
if gotAddr2 != dip2 {
t.Fatalf("Connector stored destination for tip: %v, want %v", gotAddr2, dip2)
}
}
// TestHandleConnectorTransitIPRequestMultipleTIP tests that we can
// get a req with multiple mappings and we store them all. Including
// multiple transit addrs for the same destination.
func TestHandleConnectorTransitIPRequestMultipleTIP(t *testing.T) {
c := &Conn25{}
nid := tailcfg.NodeID(1)
tip := netip.MustParseAddr("0.0.0.1")
tip2 := netip.MustParseAddr("0.0.0.2")
tip3 := netip.MustParseAddr("0.0.0.3")
dip := netip.MustParseAddr("1.2.3.4")
dip2 := netip.MustParseAddr("1.2.3.5")
req := ConnectorTransitIPRequest{
TransitIPs: []TransitIPRequest{
{TransitIP: tip, DestinationIP: dip},
{TransitIP: tip2, DestinationIP: dip2},
// can store same dst addr for multiple transit addrs
{TransitIP: tip3, DestinationIP: dip},
},
}
resp := c.HandleConnectorTransitIPRequest(nid, req)
if len(resp.TransitIPs) != 3 {
t.Fatalf("n TransitIPs in response: %d, want 3", len(resp.TransitIPs))
}
for i := 0; i < 3; i++ {
got := resp.TransitIPs[i].Code
if got != TransitIPResponseCode(0) {
t.Fatalf("i=%d TransitIP Code: %d, want 0", i, got)
}
}
gotAddr1 := c.transitIPTarget(nid, tip)
if gotAddr1 != dip {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip, gotAddr1, dip)
}
gotAddr2 := c.transitIPTarget(nid, tip2)
if gotAddr2 != dip2 {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip2, gotAddr2, dip2)
}
gotAddr3 := c.transitIPTarget(nid, tip3)
if gotAddr3 != dip {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip3, gotAddr3, dip)
}
}
// TestHandleConnectorTransitIPRequestSameTIP tests that if we get
// a req that has more than one TransitIPRequest for the same transit addr
// only the first is stored, and the subsequent ones get an error code and
// message in the response.
func TestHandleConnectorTransitIPRequestSameTIP(t *testing.T) {
c := &Conn25{}
nid := tailcfg.NodeID(1)
tip := netip.MustParseAddr("0.0.0.1")
tip2 := netip.MustParseAddr("0.0.0.2")
dip := netip.MustParseAddr("1.2.3.4")
dip2 := netip.MustParseAddr("1.2.3.5")
dip3 := netip.MustParseAddr("1.2.3.6")
req := ConnectorTransitIPRequest{
TransitIPs: []TransitIPRequest{
{TransitIP: tip, DestinationIP: dip},
// cannot have dupe TransitIPs in one ConnectorTransitIPRequest
{TransitIP: tip, DestinationIP: dip2},
{TransitIP: tip2, DestinationIP: dip3},
},
}
resp := c.HandleConnectorTransitIPRequest(nid, req)
if len(resp.TransitIPs) != 3 {
t.Fatalf("n TransitIPs in response: %d, want 3", len(resp.TransitIPs))
}
got := resp.TransitIPs[0].Code
if got != TransitIPResponseCode(0) {
t.Fatalf("i=0 TransitIP Code: %d, want 0", got)
}
msg := resp.TransitIPs[0].Message
if msg != "" {
t.Fatalf("i=0 TransitIP Message: \"%s\", want \"%s\"", msg, "")
}
got1 := resp.TransitIPs[1].Code
if got1 != TransitIPResponseCode(1) {
t.Fatalf("i=1 TransitIP Code: %d, want 1", got1)
}
msg1 := resp.TransitIPs[1].Message
if msg1 != dupeTransitIPMessage {
t.Fatalf("i=1 TransitIP Message: \"%s\", want \"%s\"", msg1, dupeTransitIPMessage)
}
got2 := resp.TransitIPs[2].Code
if got2 != TransitIPResponseCode(0) {
t.Fatalf("i=2 TransitIP Code: %d, want 0", got2)
}
msg2 := resp.TransitIPs[2].Message
if msg2 != "" {
t.Fatalf("i=2 TransitIP Message: \"%s\", want \"%s\"", msg2, "")
}
gotAddr1 := c.transitIPTarget(nid, tip)
if gotAddr1 != dip {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip, gotAddr1, dip)
}
gotAddr2 := c.transitIPTarget(nid, tip2)
if gotAddr2 != dip3 {
t.Fatalf("Connector stored destination for tip(%v): %v, want %v", tip2, gotAddr2, dip3)
}
}
// TestTransitIPTargetUnknownTIP tests that unknown transit addresses can be looked up without problem.
func TestTransitIPTargetUnknownTIP(t *testing.T) {
c := &Conn25{}
nid := tailcfg.NodeID(1)
tip := netip.MustParseAddr("0.0.0.1")
got := c.transitIPTarget(nid, tip)
want := netip.Addr{}
if got != want {
t.Fatalf("Unknown transit addr, want: %v, got %v", want, got)
}
}

@@ -1,61 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package appc
import (
"errors"
"net/netip"
"go4.org/netipx"
)
// errPoolExhausted is returned when there are no more addresses to iterate over.
var errPoolExhausted = errors.New("ip pool exhausted")
// ippool allows for iteration over all the addresses within a netipx.IPSet.
// netipx.IPSet has a Ranges call that returns the "minimum and sorted set of IP ranges that covers [the set]".
// netipx.IPRange is "an inclusive range of IP addresses from the same address family". So we can iterate over
// all the addresses in the set by keeping track of the last address we returned, calling Next on it
// to get the new one and, if we run off the end of the current range, starting on the next one.
type ippool struct {
// ranges defines the addresses in the pool
ranges []netipx.IPRange
// last tracks the last address that was provided.
last netip.Addr
// rangeIdx tracks which netipx.IPRange from the IPSet we are currently on.
rangeIdx int
}
func newIPPool(ipset *netipx.IPSet) *ippool {
if ipset == nil {
return &ippool{}
}
return &ippool{ranges: ipset.Ranges()}
}
// next returns the next address from the set, or errPoolExhausted if we have
// iterated over the whole set.
func (ipp *ippool) next() (netip.Addr, error) {
if ipp.rangeIdx >= len(ipp.ranges) {
// ipset is empty or we have iterated off the end
return netip.Addr{}, errPoolExhausted
}
if !ipp.last.IsValid() {
// not initialized yet
ipp.last = ipp.ranges[0].From()
return ipp.last, nil
}
currRange := ipp.ranges[ipp.rangeIdx]
if ipp.last == currRange.To() {
// then we need to move to the next range
ipp.rangeIdx++
if ipp.rangeIdx >= len(ipp.ranges) {
return netip.Addr{}, errPoolExhausted
}
ipp.last = ipp.ranges[ipp.rangeIdx].From()
return ipp.last, nil
}
ipp.last = ipp.last.Next()
return ipp.last, nil
}

@@ -1,60 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package appc
import (
"errors"
"net/netip"
"testing"
"go4.org/netipx"
"tailscale.com/util/must"
)
func TestNext(t *testing.T) {
a := ippool{}
_, err := a.next()
if !errors.Is(err, errPoolExhausted) {
t.Fatalf("expected errPoolExhausted, got %v", err)
}
var isb netipx.IPSetBuilder
ipset := must.Get(isb.IPSet())
b := newIPPool(ipset)
_, err = b.next()
if !errors.Is(err, errPoolExhausted) {
t.Fatalf("expected errPoolExhausted, got %v", err)
}
isb.AddRange(netipx.IPRangeFrom(netip.MustParseAddr("192.168.0.0"), netip.MustParseAddr("192.168.0.2")))
isb.AddRange(netipx.IPRangeFrom(netip.MustParseAddr("200.0.0.0"), netip.MustParseAddr("200.0.0.0")))
isb.AddRange(netipx.IPRangeFrom(netip.MustParseAddr("201.0.0.0"), netip.MustParseAddr("201.0.0.1")))
ipset = must.Get(isb.IPSet())
c := newIPPool(ipset)
expected := []string{
"192.168.0.0",
"192.168.0.1",
"192.168.0.2",
"200.0.0.0",
"201.0.0.0",
"201.0.0.1",
}
for i, want := range expected {
addr, err := c.next()
if err != nil {
t.Fatal(err)
}
if addr != netip.MustParseAddr(want) {
t.Fatalf("next call %d want: %s, got: %v", i, want, addr)
}
}
_, err = c.next()
if !errors.Is(err, errPoolExhausted) {
t.Fatalf("expected errPoolExhausted, got %v", err)
}
_, err = c.next()
if !errors.Is(err, errPoolExhausted) {
t.Fatalf("expected errPoolExhausted, got %v", err)
}
}

@@ -1,132 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_appconnectors
package appc
import (
"net/netip"
"strings"
"golang.org/x/net/dns/dnsmessage"
"tailscale.com/util/mak"
)
// ObserveDNSResponse is a callback invoked by the DNS resolver when a DNS
// response is being returned over the PeerAPI. The response is parsed and
// matched against the configured domains; if one matches, the routeAdvertiser
// is advised to advertise the discovered route.
func (e *AppConnector) ObserveDNSResponse(res []byte) error {
var p dnsmessage.Parser
if _, err := p.Start(res); err != nil {
return err
}
if err := p.SkipAllQuestions(); err != nil {
return err
}
// cnameChain tracks a chain of CNAMEs for a given query in order to reverse
// a CNAME chain back to the original query for flattening. The keys are
// CNAME record targets, and the value is the name the record answers, so
// for www.example.com CNAME example.com, the map would contain
// ["example.com"] = "www.example.com".
var cnameChain map[string]string
// addressRecords is a list of address records found in the response.
var addressRecords map[string][]netip.Addr
for {
h, err := p.AnswerHeader()
if err == dnsmessage.ErrSectionDone {
break
}
if err != nil {
return err
}
if h.Class != dnsmessage.ClassINET {
if err := p.SkipAnswer(); err != nil {
return err
}
continue
}
switch h.Type {
case dnsmessage.TypeCNAME, dnsmessage.TypeA, dnsmessage.TypeAAAA:
default:
if err := p.SkipAnswer(); err != nil {
return err
}
continue
}
domain := strings.TrimSuffix(strings.ToLower(h.Name.String()), ".")
if len(domain) == 0 {
continue
}
if h.Type == dnsmessage.TypeCNAME {
res, err := p.CNAMEResource()
if err != nil {
return err
}
cname := strings.TrimSuffix(strings.ToLower(res.CNAME.String()), ".")
if len(cname) == 0 {
continue
}
mak.Set(&cnameChain, cname, domain)
continue
}
switch h.Type {
case dnsmessage.TypeA:
r, err := p.AResource()
if err != nil {
return err
}
addr := netip.AddrFrom4(r.A)
mak.Set(&addressRecords, domain, append(addressRecords[domain], addr))
case dnsmessage.TypeAAAA:
r, err := p.AAAAResource()
if err != nil {
return err
}
addr := netip.AddrFrom16(r.AAAA)
mak.Set(&addressRecords, domain, append(addressRecords[domain], addr))
default:
if err := p.SkipAnswer(); err != nil {
return err
}
continue
}
}
e.mu.Lock()
defer e.mu.Unlock()
for domain, addrs := range addressRecords {
domain, isRouted := e.findRoutedDomainLocked(domain, cnameChain)
// neither the domain nor any of the CNAMEs in the chain is routed
if !isRouted {
continue
}
// advertise each address we have learned for the routed domain that
// was not already known.
var toAdvertise []netip.Prefix
for _, addr := range addrs {
if !e.isAddrKnownLocked(domain, addr) {
toAdvertise = append(toAdvertise, netip.PrefixFrom(addr, addr.BitLen()))
}
}
if len(toAdvertise) > 0 {
e.logf("[v2] observed new routes for %s: %s", domain, toAdvertise)
e.scheduleAdvertisement(domain, toAdvertise...)
}
}
return nil
}
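The cnameChain reversal is easier to see in isolation. Below is a hypothetical standalone helper (not the actual findRoutedDomainLocked) that walks the chain back from the domain an address record answered:

```go
package main

import "fmt"

// routedDomain walks a CNAME chain backwards. cnameChain maps a CNAME
// record's target to the name the record answers (for
// "www.example.com CNAME example.com" it holds
// cnameChain["example.com"] = "www.example.com"), so starting from the
// domain an address record answered we can recover each alias in turn
// and check it against the routed set. The depth cap guards against
// cyclic chains.
func routedDomain(domain string, cnameChain map[string]string, routed map[string]bool) (string, bool) {
	for i := 0; i < 16; i++ {
		if routed[domain] {
			return domain, true
		}
		prev, ok := cnameChain[domain]
		if !ok {
			return "", false
		}
		domain = prev
	}
	return "", false
}

func main() {
	chain := map[string]string{"example.com": "www.example.com"}
	routed := map[string]bool{"www.example.com": true}
	fmt.Println(routedDomain("example.com", chain, routed)) // www.example.com true
}
```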

@@ -1,8 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build ts_omit_appconnectors
package appc
func (e *AppConnector) ObserveDNSResponse(res []byte) error { return nil }

@@ -15,9 +15,8 @@ import (
 )

 // WriteFile writes data to filename+some suffix, then renames it into filename.
-// The perm argument is ignored on Windows, but if the target filename already
-// exists then the target file's attributes and ACLs are preserved. If the target
-// filename already exists but is not a regular file, WriteFile returns an error.
+// The perm argument is ignored on Windows. If the target filename already
+// exists but is not a regular file, WriteFile returns an error.
 func WriteFile(filename string, data []byte, perm os.FileMode) (err error) {
 	fi, err := os.Stat(filename)
 	if err == nil && !fi.Mode().IsRegular() {
@@ -48,9 +47,5 @@ func WriteFile(filename string, data []byte, perm os.FileMode) (err error) {
 	if err := f.Close(); err != nil {
 		return err
 	}
-	return Rename(tmpName, filename)
+	return os.Rename(tmpName, filename)
 }
-
-// Rename srcFile to dstFile, similar to [os.Rename] but preserving file
-// attributes and ACLs on Windows.
-func Rename(srcFile, dstFile string) error { return rename(srcFile, dstFile) }

@@ -1,14 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !windows
package atomicfile
import (
"os"
)
func rename(srcFile, destFile string) error {
return os.Rename(srcFile, destFile)
}

@@ -31,11 +31,11 @@ func TestDoesNotOverwriteIrregularFiles(t *testing.T) {
 	// The least troublesome thing to make that is not a file is a unix socket.
 	// Making a null device sadly requires root.
-	ln, err := net.ListenUnix("unix", &net.UnixAddr{Name: path, Net: "unix"})
+	l, err := net.ListenUnix("unix", &net.UnixAddr{Name: path, Net: "unix"})
 	if err != nil {
 		t.Fatal(err)
 	}
-	defer ln.Close()
+	defer l.Close()

 	err = WriteFile(path, []byte("hello"), 0644)
 	if err == nil {

@@ -1,33 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package atomicfile
import (
"os"
"golang.org/x/sys/windows"
)
func rename(srcFile, destFile string) error {
// Use replaceFile when possible to preserve the original file's attributes and ACLs.
if err := replaceFile(destFile, srcFile); err == nil || err != windows.ERROR_FILE_NOT_FOUND {
return err
}
// destFile doesn't exist. Just do a normal rename.
return os.Rename(srcFile, destFile)
}
func replaceFile(destFile, srcFile string) error {
destFile16, err := windows.UTF16PtrFromString(destFile)
if err != nil {
return err
}
srcFile16, err := windows.UTF16PtrFromString(srcFile)
if err != nil {
return err
}
return replaceFileW(destFile16, srcFile16, nil, 0, nil, nil)
}

@@ -1,146 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package atomicfile
import (
"os"
"testing"
"unsafe"
"golang.org/x/sys/windows"
)
var _SECURITY_RESOURCE_MANAGER_AUTHORITY = windows.SidIdentifierAuthority{[6]byte{0, 0, 0, 0, 0, 9}}
// makeRandomSID generates a SID derived from a v4 GUID.
// This is basically the same algorithm used by browser sandboxes for generating
// random SIDs.
func makeRandomSID() (*windows.SID, error) {
guid, err := windows.GenerateGUID()
if err != nil {
return nil, err
}
rids := *((*[4]uint32)(unsafe.Pointer(&guid)))
var pSID *windows.SID
if err := windows.AllocateAndInitializeSid(&_SECURITY_RESOURCE_MANAGER_AUTHORITY, 4, rids[0], rids[1], rids[2], rids[3], 0, 0, 0, 0, &pSID); err != nil {
return nil, err
}
defer windows.FreeSid(pSID)
// Make a copy that lives on the Go heap
return pSID.Copy()
}
func getExistingFileSD(name string) (*windows.SECURITY_DESCRIPTOR, error) {
const infoFlags = windows.DACL_SECURITY_INFORMATION
return windows.GetNamedSecurityInfo(name, windows.SE_FILE_OBJECT, infoFlags)
}
func getExistingFileDACL(name string) (*windows.ACL, error) {
sd, err := getExistingFileSD(name)
if err != nil {
return nil, err
}
dacl, _, err := sd.DACL()
return dacl, err
}
func addDenyACEForRandomSID(dacl *windows.ACL) (*windows.ACL, error) {
randomSID, err := makeRandomSID()
if err != nil {
return nil, err
}
randomSIDTrustee := windows.TRUSTEE{nil, windows.NO_MULTIPLE_TRUSTEE,
windows.TRUSTEE_IS_SID, windows.TRUSTEE_IS_UNKNOWN,
windows.TrusteeValueFromSID(randomSID)}
entries := []windows.EXPLICIT_ACCESS{
{
windows.GENERIC_ALL,
windows.DENY_ACCESS,
windows.NO_INHERITANCE,
randomSIDTrustee,
},
}
return windows.ACLFromEntries(entries, dacl)
}
func setExistingFileDACL(name string, dacl *windows.ACL) error {
return windows.SetNamedSecurityInfo(name, windows.SE_FILE_OBJECT,
windows.DACL_SECURITY_INFORMATION, nil, nil, dacl, nil)
}
// makeOrigFileWithCustomDACL creates a new, temporary file with a custom
// DACL that we can check for later. It returns the name of the temporary
// file and the security descriptor for the file in SDDL format.
func makeOrigFileWithCustomDACL() (name, sddl string, err error) {
f, err := os.CreateTemp("", "foo*.tmp")
if err != nil {
return "", "", err
}
name = f.Name()
if err := f.Close(); err != nil {
return "", "", err
}
f = nil
defer func() {
if err != nil {
os.Remove(name)
}
}()
dacl, err := getExistingFileDACL(name)
if err != nil {
return "", "", err
}
// Add a harmless, deny-only ACE for a random SID that isn't used for anything
// (but that we can check for later).
dacl, err = addDenyACEForRandomSID(dacl)
if err != nil {
return "", "", err
}
if err := setExistingFileDACL(name, dacl); err != nil {
return "", "", err
}
sd, err := getExistingFileSD(name)
if err != nil {
return "", "", err
}
return name, sd.String(), nil
}
func TestPreserveSecurityInfo(t *testing.T) {
// Make a test file with a custom ACL.
origFileName, want, err := makeOrigFileWithCustomDACL()
if err != nil {
t.Fatalf("makeOrigFileWithCustomDACL returned %v", err)
}
t.Cleanup(func() {
os.Remove(origFileName)
})
if err := WriteFile(origFileName, []byte{}, 0); err != nil {
t.Fatalf("WriteFile returned %v", err)
}
// We expect origFileName's security descriptor to be unchanged despite
// the WriteFile call.
sd, err := getExistingFileSD(origFileName)
if err != nil {
t.Fatalf("getExistingFileSD(%q) returned %v", origFileName, err)
}
if got := sd.String(); got != want {
t.Errorf("security descriptor comparison failed: got %q, want %q", got, want)
}
}

@@ -1,8 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package atomicfile
//go:generate go run golang.org/x/sys/windows/mkwinsyscall -output zsyscall_windows.go mksyscall.go
//sys replaceFileW(replaced *uint16, replacement *uint16, backup *uint16, flags uint32, exclude unsafe.Pointer, reserved unsafe.Pointer) (err error) [int32(failretval)==0] = kernel32.ReplaceFileW

@@ -1,52 +0,0 @@
// Code generated by 'go generate'; DO NOT EDIT.
package atomicfile
import (
"syscall"
"unsafe"
"golang.org/x/sys/windows"
)
var _ unsafe.Pointer
// Do the interface allocations only once for common
// Errno values.
const (
errnoERROR_IO_PENDING = 997
)
var (
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
errERROR_EINVAL error = syscall.EINVAL
)
// errnoErr returns common boxed Errno values, to prevent
// allocations at runtime.
func errnoErr(e syscall.Errno) error {
switch e {
case 0:
return errERROR_EINVAL
case errnoERROR_IO_PENDING:
return errERROR_IO_PENDING
}
// TODO: add more here, after collecting data on the common
// error values see on Windows. (perhaps when running
// all.bat?)
return e
}
var (
modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
procReplaceFileW = modkernel32.NewProc("ReplaceFileW")
)
func replaceFileW(replaced *uint16, replacement *uint16, backup *uint16, flags uint32, exclude unsafe.Pointer, reserved unsafe.Pointer) (err error) {
r1, _, e1 := syscall.SyscallN(procReplaceFileW.Addr(), uintptr(unsafe.Pointer(replaced)), uintptr(unsafe.Pointer(replacement)), uintptr(unsafe.Pointer(backup)), uintptr(flags), uintptr(exclude), uintptr(reserved))
if int32(r1) == 0 {
err = errnoErr(e1)
}
return
}

@@ -18,7 +18,7 @@ fi

eval `CGO_ENABLED=0 GOOS=$($go env GOHOSTOS) GOARCH=$($go env GOHOSTARCH) $go run ./cmd/mkversion`

-if [ "$#" -ge 1 ] && [ "$1" = "shellvars" ]; then
+if [ "$1" = "shellvars" ]; then
 	cat <<EOF
 VERSION_MINOR="$VERSION_MINOR"
 VERSION_SHORT="$VERSION_SHORT"
@@ -28,34 +28,18 @@ EOF
 	exit 0
 fi

-tags="${TAGS:-}"
+tags=""
 ldflags="-X tailscale.com/version.longStamp=${VERSION_LONG} -X tailscale.com/version.shortStamp=${VERSION_SHORT}"

 # build_dist.sh arguments must precede go build arguments.
 while [ "$#" -gt 1 ]; do
 	case "$1" in
 	--extra-small)
-		if [ ! -z "${TAGS:-}" ]; then
-			echo "set either --extra-small or \$TAGS, but not both"
-			exit 1
-		fi
 		shift
 		ldflags="$ldflags -w -s"
-		tags="${tags:+$tags,},$(GOOS= GOARCH= $go run ./cmd/featuretags --min --add=osrouter)"
-		;;
-	--min)
-		# --min is like --extra-small but even smaller, removing all features,
-		# even if it results in a useless binary (e.g. removing both netstack +
-		# osrouter). It exists for benchmarking purposes only.
-		shift
-		ldflags="$ldflags -w -s"
-		tags="${tags:+$tags,},$(GOOS= GOARCH= $go run ./cmd/featuretags --min)"
+		tags="${tags:+$tags,}ts_omit_aws,ts_omit_bird,ts_omit_tap,ts_omit_kube,ts_omit_completion"
 		;;
 	--box)
-		if [ ! -z "${TAGS:-}" ]; then
-			echo "set either --box or \$TAGS, but not both"
-			exit 1
-		fi
 		shift
 		tags="${tags:+$tags,}ts_include_cli"
 		;;
@@ -65,4 +49,4 @@ while [ "$#" -gt 1 ]; do
 	esac
 done

-exec $go build ${tags:+-tags=$tags} -trimpath -ldflags "$ldflags" "$@"
+exec $go build ${tags:+-tags=$tags} -ldflags "$ldflags" "$@"

@@ -6,16 +6,6 @@
 # hash of this repository as produced by ./cmd/mkversion.
 # This is the image build mechanim used to build the official Tailscale
 # container images.
-#
-# If you want to build local images for testing, you can use make, which provides few convenience wrappers around this script.
-#
-# To build a Tailscale image and push to the local docker registry:
-# $ REPO=local/tailscale TAGS=v0.0.1 PLATFORM=local make publishdevimage
-#
-# To build a Tailscale image and push to a remote docker registry:
-#
-# $ REPO=<your-registry>/<your-repo>/tailscale TAGS=v0.0.1 make publishdevimage

 set -eu
@@ -26,7 +16,7 @@ eval "$(./build_dist.sh shellvars)"

 DEFAULT_TARGET="client"
 DEFAULT_TAGS="v${VERSION_SHORT},v${VERSION_MINOR}"
-DEFAULT_BASE="tailscale/alpine-base:3.22"
+DEFAULT_BASE="tailscale/alpine-base:3.18"

 # Set a few pre-defined OCI annotations. The source annotation is used by tools such as Renovate that scan the linked
 # Github repo to find release notes for any new image tags. Note that for official Tailscale images the default
 # annotations defined here will be overriden by release scripts that call this script.
@@ -38,7 +28,6 @@ TARGET="${TARGET:-${DEFAULT_TARGET}}"
 TAGS="${TAGS:-${DEFAULT_TAGS}}"
 BASE="${BASE:-${DEFAULT_BASE}}"
 PLATFORM="${PLATFORM:-}" # default to all platforms
-FILES="${FILES:-}" # default to no extra files

 # OCI annotations that will be added to the image.
 # https://github.com/opencontainers/image-spec/blob/main/annotations.md
 ANNOTATIONS="${ANNOTATIONS:-${DEFAULT_ANNOTATIONS}}"
@@ -63,7 +52,6 @@ case "$TARGET" in
 			--push="${PUSH}" \
 			--target="${PLATFORM}" \
 			--annotations="${ANNOTATIONS}" \
-			--files="${FILES}" \
 			/usr/local/bin/containerboot
 		;;
 	k8s-operator)
@@ -82,7 +70,6 @@ case "$TARGET" in
 			--push="${PUSH}" \
 			--target="${PLATFORM}" \
 			--annotations="${ANNOTATIONS}" \
-			--files="${FILES}" \
 			/usr/local/bin/operator
 		;;
 	k8s-nameserver)
@@ -101,47 +88,8 @@ case "$TARGET" in
 			--push="${PUSH}" \
 			--target="${PLATFORM}" \
 			--annotations="${ANNOTATIONS}" \
-			--files="${FILES}" \
 			/usr/local/bin/k8s-nameserver
 		;;
-	tsidp)
-		DEFAULT_REPOS="tailscale/tsidp"
-		REPOS="${REPOS:-${DEFAULT_REPOS}}"
-		go run github.com/tailscale/mkctr \
-			--gopaths="tailscale.com/cmd/tsidp:/usr/local/bin/tsidp" \
-			--ldflags=" \
-				-X tailscale.com/version.longStamp=${VERSION_LONG} \
-				-X tailscale.com/version.shortStamp=${VERSION_SHORT} \
-				-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
-			--base="${BASE}" \
-			--tags="${TAGS}" \
-			--gotags="ts_package_container" \
-			--repos="${REPOS}" \
-			--push="${PUSH}" \
-			--target="${PLATFORM}" \
-			--annotations="${ANNOTATIONS}" \
-			--files="${FILES}" \
-			/usr/local/bin/tsidp
-		;;
-	k8s-proxy)
-		DEFAULT_REPOS="tailscale/k8s-proxy"
-		REPOS="${REPOS:-${DEFAULT_REPOS}}"
-		go run github.com/tailscale/mkctr \
-			--gopaths="tailscale.com/cmd/k8s-proxy:/usr/local/bin/k8s-proxy" \
-			--ldflags=" \
-				-X tailscale.com/version.longStamp=${VERSION_LONG} \
-				-X tailscale.com/version.shortStamp=${VERSION_SHORT} \
-				-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
-			--base="${BASE}" \
-			--tags="${TAGS}" \
-			--gotags="ts_kube,ts_package_container" \
-			--repos="${REPOS}" \
-			--push="${PUSH}" \
-			--target="${PLATFORM}" \
-			--annotations="${ANNOTATIONS}" \
-			--files="${FILES}" \
-			/usr/local/bin/k8s-proxy
-		;;
 	*)
 		echo "unknown target: $TARGET"
 		exit 1

@@ -1,6 +1,5 @@
 // Copyright (c) Tailscale Inc & AUTHORS
 // SPDX-License-Identifier: BSD-3-Clause

 package chirp

 import (
@@ -24,7 +23,7 @@ type fakeBIRD struct {
 func newFakeBIRD(t *testing.T, protocols ...string) *fakeBIRD {
 	sock := filepath.Join(t.TempDir(), "sock")
-	ln, err := net.Listen("unix", sock)
+	l, err := net.Listen("unix", sock)
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -33,7 +32,7 @@ func newFakeBIRD(t *testing.T, protocols ...string) *fakeBIRD {
 		pe[p] = false
 	}
 	return &fakeBIRD{
-		Listener:         ln,
+		Listener:         l,
 		protocolsEnabled: pe,
 		sock:             sock,
 	}
@@ -123,12 +122,12 @@ type hangingListener struct {
 func newHangingListener(t *testing.T) *hangingListener {
 	sock := filepath.Join(t.TempDir(), "sock")
-	ln, err := net.Listen("unix", sock)
+	l, err := net.Listen("unix", sock)
 	if err != nil {
 		t.Fatal(err)
 	}
 	return &hangingListener{
-		Listener: ln,
+		Listener: l,
 		t:        t,
 		done:     make(chan struct{}),
 		sock:     sock,

@@ -1,151 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !js && !ts_omit_acme
package local
import (
"context"
"crypto/tls"
"errors"
"fmt"
"net/url"
"strings"
"time"
"go4.org/mem"
)
// SetDNS adds a DNS TXT record for the given domain name, containing
// the provided TXT value. The intended use case is answering
// LetsEncrypt/ACME dns-01 challenges.
//
// The control plane will only permit SetDNS requests with very
// specific names and values. The name should be
// "_acme-challenge." + your node's MagicDNS name. It's expected that
// clients cache the certs from LetsEncrypt (or whichever CA is
// providing them) and only request new ones as needed; the control plane
// rate limits SetDNS requests.
//
// This is a low-level interface; it's expected that most Tailscale
// users use a higher level interface to getting/using TLS
// certificates.
func (lc *Client) SetDNS(ctx context.Context, name, value string) error {
v := url.Values{}
v.Set("name", name)
v.Set("value", value)
_, err := lc.send(ctx, "POST", "/localapi/v0/set-dns?"+v.Encode(), 200, nil)
return err
}
// CertPair returns a cert and private key for the provided DNS domain.
//
// It returns a cached certificate from disk if it's still valid.
//
// Deprecated: use [Client.CertPair].
func CertPair(ctx context.Context, domain string) (certPEM, keyPEM []byte, err error) {
return defaultClient.CertPair(ctx, domain)
}
// CertPair returns a cert and private key for the provided DNS domain.
//
// It returns a cached certificate from disk if it's still valid.
//
// API maturity: this is considered a stable API.
func (lc *Client) CertPair(ctx context.Context, domain string) (certPEM, keyPEM []byte, err error) {
return lc.CertPairWithValidity(ctx, domain, 0)
}
// CertPairWithValidity returns a cert and private key for the provided DNS
// domain.
//
// It returns a cached certificate from disk if it's still valid.
// When minValidity is non-zero, the returned certificate will be valid for at
// least the given duration, if permitted by the CA. If the certificate is
// valid, but for less than minValidity, it will be synchronously renewed.
//
// API maturity: this is considered a stable API.
func (lc *Client) CertPairWithValidity(ctx context.Context, domain string, minValidity time.Duration) (certPEM, keyPEM []byte, err error) {
res, err := lc.send(ctx, "GET", fmt.Sprintf("/localapi/v0/cert/%s?type=pair&min_validity=%s", domain, minValidity), 200, nil)
if err != nil {
return nil, nil, err
}
// with ?type=pair, the response PEM is first the one private
// key PEM block, then the cert PEM blocks.
i := mem.Index(mem.B(res), mem.S("--\n--"))
if i == -1 {
return nil, nil, fmt.Errorf("unexpected output: no delimiter")
}
i += len("--\n")
keyPEM, certPEM = res[:i], res[i:]
if mem.Contains(mem.B(certPEM), mem.S(" PRIVATE KEY-----")) {
return nil, nil, fmt.Errorf("unexpected output: key in cert")
}
return certPEM, keyPEM, nil
}
// GetCertificate fetches a TLS certificate for the TLS ClientHello in hi.
//
// It returns a cached certificate from disk if it's still valid.
//
// It's the right signature to use as the value of
// [tls.Config.GetCertificate].
//
// Deprecated: use [Client.GetCertificate].
func GetCertificate(hi *tls.ClientHelloInfo) (*tls.Certificate, error) {
return defaultClient.GetCertificate(hi)
}
// GetCertificate fetches a TLS certificate for the TLS ClientHello in hi.
//
// It returns a cached certificate from disk if it's still valid.
//
// It's the right signature to use as the value of
// [tls.Config.GetCertificate].
//
// API maturity: this is considered a stable API.
func (lc *Client) GetCertificate(hi *tls.ClientHelloInfo) (*tls.Certificate, error) {
if hi == nil || hi.ServerName == "" {
return nil, errors.New("no SNI ServerName")
}
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
name := hi.ServerName
if !strings.Contains(name, ".") {
if v, ok := lc.ExpandSNIName(ctx, name); ok {
name = v
}
}
certPEM, keyPEM, err := lc.CertPair(ctx, name)
if err != nil {
return nil, err
}
cert, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
return nil, err
}
return &cert, nil
}
// ExpandSNIName expands a bare label name into the most likely actual TLS cert name.
//
// Deprecated: use [Client.ExpandSNIName].
func ExpandSNIName(ctx context.Context, name string) (fqdn string, ok bool) {
return defaultClient.ExpandSNIName(ctx, name)
}
// ExpandSNIName expands a bare label name into the most likely actual TLS cert name.
func (lc *Client) ExpandSNIName(ctx context.Context, name string) (fqdn string, ok bool) {
st, err := lc.StatusWithoutPeers(ctx)
if err != nil {
return "", false
}
for _, d := range st.CertDomains {
if len(d) > len(name)+1 && strings.HasPrefix(d, name) && d[len(name)] == '.' {
return d, true
}
}
return "", false
}
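The matching rule in ExpandSNIName (the cert domain must extend the bare label past a dot) can be demonstrated standalone; the domain below is made up, and `expand` is a hypothetical extraction of the loop body:

```go
package main

import (
	"fmt"
	"strings"
)

// expand mirrors ExpandSNIName's matching: a bare label matches a cert
// domain when the domain is the label, a dot, and at least one more
// character — so "ho" must not match "host.example.ts.net".
func expand(name string, certDomains []string) (string, bool) {
	for _, d := range certDomains {
		if len(d) > len(name)+1 && strings.HasPrefix(d, name) && d[len(name)] == '.' {
			return d, true
		}
	}
	return "", false
}

func main() {
	domains := []string{"host.example.ts.net"} // hypothetical cert domain
	fmt.Println(expand("host", domains)) // host.example.ts.net true
	fmt.Println(expand("ho", domains))   // no match: prefix must end at a dot
}
```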

@@ -1,84 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_debugportmapper
package local
import (
"cmp"
"context"
"fmt"
"io"
"net/http"
"net/netip"
"net/url"
"strconv"
"time"
"tailscale.com/client/tailscale/apitype"
)
// DebugPortmapOpts contains options for the [Client.DebugPortmap] command.
type DebugPortmapOpts struct {
// Duration is how long the mapping should be created for. It defaults
// to 5 seconds if not set.
Duration time.Duration
// Type is the kind of portmap to debug. The empty string instructs the
// portmap client to perform all known types. Other valid options are
// "pmp", "pcp", and "upnp".
Type string
// GatewayAddr specifies the gateway address used during portmapping.
// If set, SelfAddr must also be set. If unset, it will be
// autodetected.
GatewayAddr netip.Addr
// SelfAddr specifies the gateway address used during portmapping. If
// set, GatewayAddr must also be set. If unset, it will be
// autodetected.
SelfAddr netip.Addr
// LogHTTP instructs the debug-portmap endpoint to print all HTTP
// requests and responses made to the logs.
LogHTTP bool
}
// DebugPortmap invokes the debug-portmap endpoint, and returns an
// io.ReadCloser that can be used to read the logs that are printed during this
// process.
//
// opts can be nil; if so, default values will be used.
func (lc *Client) DebugPortmap(ctx context.Context, opts *DebugPortmapOpts) (io.ReadCloser, error) {
vals := make(url.Values)
if opts == nil {
opts = &DebugPortmapOpts{}
}
vals.Set("duration", cmp.Or(opts.Duration, 5*time.Second).String())
vals.Set("type", opts.Type)
vals.Set("log_http", strconv.FormatBool(opts.LogHTTP))
if opts.GatewayAddr.IsValid() != opts.SelfAddr.IsValid() {
return nil, fmt.Errorf("both GatewayAddr and SelfAddr must be provided if one is")
} else if opts.GatewayAddr.IsValid() {
vals.Set("gateway_and_self", fmt.Sprintf("%s/%s", opts.GatewayAddr, opts.SelfAddr))
}
req, err := http.NewRequestWithContext(ctx, "GET", "http://"+apitype.LocalAPIHost+"/localapi/v0/debug-portmap?"+vals.Encode(), nil)
if err != nil {
return nil, err
}
res, err := lc.doLocalRequestNiceError(req)
if err != nil {
return nil, err
}
if res.StatusCode != 200 {
body, _ := io.ReadAll(res.Body)
res.Body.Close()
return nil, fmt.Errorf("HTTP %s: %s", res.Status, body)
}
return res.Body, nil
}

@@ -1,55 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_serve
package local
import (
"context"
"encoding/json"
"fmt"
"net/http"
"tailscale.com/ipn"
)
// GetServeConfig returns the current serve config.
//
// If the serve config is empty, it returns (nil, nil).
func (lc *Client) GetServeConfig(ctx context.Context) (*ipn.ServeConfig, error) {
body, h, err := lc.sendWithHeaders(ctx, "GET", "/localapi/v0/serve-config", 200, nil, nil)
if err != nil {
return nil, fmt.Errorf("getting serve config: %w", err)
}
sc, err := getServeConfigFromJSON(body)
if err != nil {
return nil, err
}
if sc == nil {
sc = new(ipn.ServeConfig)
}
sc.ETag = h.Get("Etag")
return sc, nil
}
func getServeConfigFromJSON(body []byte) (sc *ipn.ServeConfig, err error) {
if err := json.Unmarshal(body, &sc); err != nil {
return nil, err
}
return sc, nil
}
// SetServeConfig sets or replaces the serving settings.
// If config is nil, settings are cleared and serving is disabled.
func (lc *Client) SetServeConfig(ctx context.Context, config *ipn.ServeConfig) error {
h := make(http.Header)
if config != nil {
h.Set("If-Match", config.ETag)
}
_, _, err := lc.sendWithHeaders(ctx, "POST", "/localapi/v0/serve-config", 200, jsonBody(config), h)
if err != nil {
return fmt.Errorf("sending serve config: %w", err)
}
return nil
}

@@ -1,40 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_syspolicy
package local
import (
"context"
"net/http"
"tailscale.com/util/syspolicy/setting"
)
// GetEffectivePolicy returns the effective policy for the specified scope.
func (lc *Client) GetEffectivePolicy(ctx context.Context, scope setting.PolicyScope) (*setting.Snapshot, error) {
scopeID, err := scope.MarshalText()
if err != nil {
return nil, err
}
body, err := lc.get200(ctx, "/localapi/v0/policy/"+string(scopeID))
if err != nil {
return nil, err
}
return decodeJSON[*setting.Snapshot](body)
}
// ReloadEffectivePolicy reloads the effective policy for the specified scope
// by reading and merging policy settings from all applicable policy sources.
func (lc *Client) ReloadEffectivePolicy(ctx context.Context, scope setting.PolicyScope) (*setting.Snapshot, error) {
scopeID, err := scope.MarshalText()
if err != nil {
return nil, err
}
body, err := lc.send(ctx, "POST", "/localapi/v0/policy/"+string(scopeID), 200, http.NoBody)
if err != nil {
return nil, err
}
return decodeJSON[*setting.Snapshot](body)
}

@@ -1,204 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !ts_omit_tailnetlock
package local
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/url"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tka"
"tailscale.com/types/key"
"tailscale.com/types/tkatype"
)
// NetworkLockStatus fetches information about the tailnet key authority, if one is configured.
func (lc *Client) NetworkLockStatus(ctx context.Context) (*ipnstate.NetworkLockStatus, error) {
body, err := lc.send(ctx, "GET", "/localapi/v0/tka/status", 200, nil)
if err != nil {
return nil, fmt.Errorf("error: %w", err)
}
return decodeJSON[*ipnstate.NetworkLockStatus](body)
}
// NetworkLockInit initializes the tailnet key authority.
//
// TODO(tom): Plumb through disablement secrets.
func (lc *Client) NetworkLockInit(ctx context.Context, keys []tka.Key, disablementValues [][]byte, supportDisablement []byte) (*ipnstate.NetworkLockStatus, error) {
var b bytes.Buffer
type initRequest struct {
Keys []tka.Key
DisablementValues [][]byte
SupportDisablement []byte
}
if err := json.NewEncoder(&b).Encode(initRequest{Keys: keys, DisablementValues: disablementValues, SupportDisablement: supportDisablement}); err != nil {
return nil, err
}
body, err := lc.send(ctx, "POST", "/localapi/v0/tka/init", 200, &b)
if err != nil {
return nil, fmt.Errorf("error: %w", err)
}
return decodeJSON[*ipnstate.NetworkLockStatus](body)
}
// NetworkLockWrapPreauthKey wraps a pre-auth key with information to
// enable unattended bringup in the locked tailnet.
func (lc *Client) NetworkLockWrapPreauthKey(ctx context.Context, preauthKey string, tkaKey key.NLPrivate) (string, error) {
encodedPrivate, err := tkaKey.MarshalText()
if err != nil {
return "", err
}
var b bytes.Buffer
type wrapRequest struct {
TSKey string
TKAKey string // key.NLPrivate.MarshalText
}
if err := json.NewEncoder(&b).Encode(wrapRequest{TSKey: preauthKey, TKAKey: string(encodedPrivate)}); err != nil {
return "", err
}
body, err := lc.send(ctx, "POST", "/localapi/v0/tka/wrap-preauth-key", 200, &b)
if err != nil {
return "", fmt.Errorf("error: %w", err)
}
return string(body), nil
}
// NetworkLockModify adds and/or removes key(s) to the tailnet key authority.
func (lc *Client) NetworkLockModify(ctx context.Context, addKeys, removeKeys []tka.Key) error {
var b bytes.Buffer
type modifyRequest struct {
AddKeys []tka.Key
RemoveKeys []tka.Key
}
if err := json.NewEncoder(&b).Encode(modifyRequest{AddKeys: addKeys, RemoveKeys: removeKeys}); err != nil {
return err
}
if _, err := lc.send(ctx, "POST", "/localapi/v0/tka/modify", 204, &b); err != nil {
return fmt.Errorf("error: %w", err)
}
return nil
}
// NetworkLockSign signs the specified node-key and transmits that signature to the control plane.
// rotationPublic, if specified, must be an ed25519 public key.
func (lc *Client) NetworkLockSign(ctx context.Context, nodeKey key.NodePublic, rotationPublic []byte) error {
var b bytes.Buffer
type signRequest struct {
NodeKey key.NodePublic
RotationPublic []byte
}
if err := json.NewEncoder(&b).Encode(signRequest{NodeKey: nodeKey, RotationPublic: rotationPublic}); err != nil {
return err
}
if _, err := lc.send(ctx, "POST", "/localapi/v0/tka/sign", 200, &b); err != nil {
return fmt.Errorf("error: %w", err)
}
return nil
}
// NetworkLockAffectedSigs returns all signatures signed by the specified keyID.
func (lc *Client) NetworkLockAffectedSigs(ctx context.Context, keyID tkatype.KeyID) ([]tkatype.MarshaledSignature, error) {
body, err := lc.send(ctx, "POST", "/localapi/v0/tka/affected-sigs", 200, bytes.NewReader(keyID))
if err != nil {
return nil, fmt.Errorf("error: %w", err)
}
return decodeJSON[[]tkatype.MarshaledSignature](body)
}
// NetworkLockLog returns up to maxEntries changes to network-lock state.
func (lc *Client) NetworkLockLog(ctx context.Context, maxEntries int) ([]ipnstate.NetworkLockUpdate, error) {
v := url.Values{}
v.Set("limit", fmt.Sprint(maxEntries))
body, err := lc.send(ctx, "GET", "/localapi/v0/tka/log?"+v.Encode(), 200, nil)
if err != nil {
return nil, fmt.Errorf("error %w: %s", err, body)
}
return decodeJSON[[]ipnstate.NetworkLockUpdate](body)
}
// NetworkLockForceLocalDisable forcibly shuts down network lock on this node.
func (lc *Client) NetworkLockForceLocalDisable(ctx context.Context) error {
// This endpoint expects an empty JSON stanza as the payload.
var b bytes.Buffer
if err := json.NewEncoder(&b).Encode(struct{}{}); err != nil {
return err
}
if _, err := lc.send(ctx, "POST", "/localapi/v0/tka/force-local-disable", 200, &b); err != nil {
return fmt.Errorf("error: %w", err)
}
return nil
}
// NetworkLockVerifySigningDeeplink verifies the network lock deeplink contained
// in url and returns information extracted from it.
func (lc *Client) NetworkLockVerifySigningDeeplink(ctx context.Context, url string) (*tka.DeeplinkValidationResult, error) {
vr := struct {
URL string
}{url}
body, err := lc.send(ctx, "POST", "/localapi/v0/tka/verify-deeplink", 200, jsonBody(vr))
if err != nil {
return nil, fmt.Errorf("sending verify-deeplink: %w", err)
}
return decodeJSON[*tka.DeeplinkValidationResult](body)
}
// NetworkLockGenRecoveryAUM generates an AUM for recovering from a tailnet-lock key compromise.
func (lc *Client) NetworkLockGenRecoveryAUM(ctx context.Context, removeKeys []tkatype.KeyID, forkFrom tka.AUMHash) ([]byte, error) {
vr := struct {
Keys []tkatype.KeyID
ForkFrom string
}{removeKeys, forkFrom.String()}
body, err := lc.send(ctx, "POST", "/localapi/v0/tka/generate-recovery-aum", 200, jsonBody(vr))
if err != nil {
return nil, fmt.Errorf("sending generate-recovery-aum: %w", err)
}
return body, nil
}
// NetworkLockCosignRecoveryAUM co-signs a recovery AUM using the node's tailnet lock key.
func (lc *Client) NetworkLockCosignRecoveryAUM(ctx context.Context, aum tka.AUM) ([]byte, error) {
r := bytes.NewReader(aum.Serialize())
body, err := lc.send(ctx, "POST", "/localapi/v0/tka/cosign-recovery-aum", 200, r)
if err != nil {
return nil, fmt.Errorf("sending cosign-recovery-aum: %w", err)
}
return body, nil
}
// NetworkLockSubmitRecoveryAUM submits a recovery AUM to the control plane.
func (lc *Client) NetworkLockSubmitRecoveryAUM(ctx context.Context, aum tka.AUM) error {
r := bytes.NewReader(aum.Serialize())
_, err := lc.send(ctx, "POST", "/localapi/v0/tka/submit-recovery-aum", 200, r)
if err != nil {
return fmt.Errorf("sending submit-recovery-aum: %w", err)
}
return nil
}
// NetworkLockDisable shuts down network-lock across the tailnet.
func (lc *Client) NetworkLockDisable(ctx context.Context, secret []byte) error {
if _, err := lc.send(ctx, "POST", "/localapi/v0/tka/disable", 200, bytes.NewReader(secret)); err != nil {
return fmt.Errorf("error: %w", err)
}
return nil
}

@@ -1,327 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin
package systray
import (
"bytes"
"context"
"image"
"image/color"
"image/png"
"runtime"
"sync"
"time"
"fyne.io/systray"
ico "github.com/Kodeworks/golang-image-ico"
"github.com/fogleman/gg"
)
// tsLogo represents the Tailscale logo displayed as the systray icon.
type tsLogo struct {
// dots represents the state of the 3x3 dot grid in the logo.
// A 0 represents a gray dot, any other value is a white dot.
dots [9]byte
// dotMask returns an image mask to be used when rendering the logo dots.
dotMask func(dc *gg.Context, borderUnits int, radius int) *image.Alpha
// overlay is called after the dots are rendered to draw an additional overlay.
overlay func(dc *gg.Context, borderUnits int, radius int)
}
var (
// disconnected is all gray dots
disconnected = tsLogo{dots: [9]byte{
0, 0, 0,
0, 0, 0,
0, 0, 0,
}}
// connected is the normal Tailscale logo
connected = tsLogo{dots: [9]byte{
0, 0, 0,
1, 1, 1,
0, 1, 0,
}}
// loading is a special tsLogo value that is not meant to be rendered directly,
// but indicates that the loading animation should be shown.
loading = tsLogo{dots: [9]byte{'l', 'o', 'a', 'd', 'i', 'n', 'g'}}
// loadingLogos are shown in sequence as an animated loading icon.
loadingLogos = []tsLogo{
{dots: [9]byte{
0, 1, 1,
1, 0, 1,
0, 0, 1,
}},
{dots: [9]byte{
0, 1, 1,
0, 0, 1,
0, 1, 0,
}},
{dots: [9]byte{
0, 1, 1,
0, 0, 0,
0, 0, 1,
}},
{dots: [9]byte{
0, 0, 1,
0, 1, 0,
0, 0, 0,
}},
{dots: [9]byte{
0, 1, 0,
0, 0, 0,
0, 0, 0,
}},
{dots: [9]byte{
0, 0, 0,
0, 0, 1,
0, 0, 0,
}},
{dots: [9]byte{
0, 0, 0,
0, 0, 0,
0, 0, 0,
}},
{dots: [9]byte{
0, 0, 1,
0, 0, 0,
0, 0, 0,
}},
{dots: [9]byte{
0, 0, 0,
0, 0, 0,
1, 0, 0,
}},
{dots: [9]byte{
0, 0, 0,
0, 0, 0,
1, 1, 0,
}},
{dots: [9]byte{
0, 0, 0,
1, 0, 0,
1, 1, 0,
}},
{dots: [9]byte{
0, 0, 0,
1, 1, 0,
0, 1, 0,
}},
{dots: [9]byte{
0, 0, 0,
1, 1, 0,
0, 1, 1,
}},
{dots: [9]byte{
0, 0, 0,
1, 1, 1,
0, 0, 1,
}},
{dots: [9]byte{
0, 1, 0,
0, 1, 1,
1, 0, 1,
}},
}
// exitNodeOnline is the Tailscale logo with an additional arrow overlay in the corner.
exitNodeOnline = tsLogo{
dots: [9]byte{
0, 0, 0,
1, 1, 1,
0, 1, 0,
},
// draw an arrow mask in the bottom right corner with a reasonably thick line width.
dotMask: func(dc *gg.Context, borderUnits int, radius int) *image.Alpha {
bu, r := float64(borderUnits), float64(radius)
x1 := r * (bu + 3.5)
y := r * (bu + 7)
x2 := x1 + (r * 5)
mc := gg.NewContext(dc.Width(), dc.Height())
mc.DrawLine(x1, y, x2, y) // arrow center line
mc.DrawLine(x2-(1.5*r), y-(1.5*r), x2, y) // top of arrow tip
mc.DrawLine(x2-(1.5*r), y+(1.5*r), x2, y) // bottom of arrow tip
mc.SetLineWidth(r * 3)
mc.Stroke()
return mc.AsMask()
},
// draw an arrow in the bottom right corner over the masked area.
overlay: func(dc *gg.Context, borderUnits int, radius int) {
bu, r := float64(borderUnits), float64(radius)
x1 := r * (bu + 3.5)
y := r * (bu + 7)
x2 := x1 + (r * 5)
dc.DrawLine(x1, y, x2, y) // arrow center line
dc.DrawLine(x2-(1.5*r), y-(1.5*r), x2, y) // top of arrow tip
dc.DrawLine(x2-(1.5*r), y+(1.5*r), x2, y) // bottom of arrow tip
dc.SetColor(fg)
dc.SetLineWidth(r)
dc.Stroke()
},
}
// exitNodeOffline is the Tailscale logo with a red "x" in the corner.
exitNodeOffline = tsLogo{
dots: [9]byte{
0, 0, 0,
1, 1, 1,
0, 1, 0,
},
// Draw a square that hides the four dots in the bottom right corner.
dotMask: func(dc *gg.Context, borderUnits int, radius int) *image.Alpha {
bu, r := float64(borderUnits), float64(radius)
x := r * (bu + 3)
mc := gg.NewContext(dc.Width(), dc.Height())
mc.DrawRectangle(x, x, r*6, r*6)
mc.Fill()
return mc.AsMask()
},
// draw a red "x" over the bottom right corner.
overlay: func(dc *gg.Context, borderUnits int, radius int) {
bu, r := float64(borderUnits), float64(radius)
x1 := r * (bu + 4)
x2 := x1 + (r * 3.5)
dc.DrawLine(x1, x1, x2, x2) // top-left to bottom-right stroke
dc.DrawLine(x1, x2, x2, x1) // bottom-left to top-right stroke
dc.SetColor(red)
dc.SetLineWidth(r)
dc.Stroke()
},
}
)
var (
bg = color.NRGBA{0, 0, 0, 255}
fg = color.NRGBA{255, 255, 255, 255}
gray = color.NRGBA{255, 255, 255, 102}
red = color.NRGBA{229, 111, 74, 255}
)
// render returns an encoded image of the logo (ICO on Windows, PNG elsewhere).
func (logo tsLogo) render() *bytes.Buffer {
const borderUnits = 1
return logo.renderWithBorder(borderUnits)
}
// renderWithBorder returns an encoded image of the logo with the specified border width.
// One border unit is equal to the radius of a tailscale logo dot.
func (logo tsLogo) renderWithBorder(borderUnits int) *bytes.Buffer {
const radius = 25
dim := radius * (8 + borderUnits*2)
dc := gg.NewContext(dim, dim)
dc.DrawRectangle(0, 0, float64(dim), float64(dim))
dc.SetColor(bg)
dc.Fill()
if logo.dotMask != nil {
mask := logo.dotMask(dc, borderUnits, radius)
dc.SetMask(mask)
dc.InvertMask()
}
for y := 0; y < 3; y++ {
for x := 0; x < 3; x++ {
px := (borderUnits + 1 + 3*x) * radius
py := (borderUnits + 1 + 3*y) * radius
col := fg
if logo.dots[y*3+x] == 0 {
col = gray
}
dc.DrawCircle(float64(px), float64(py), radius)
dc.SetColor(col)
dc.Fill()
}
}
if logo.overlay != nil {
dc.ResetClip()
logo.overlay(dc, borderUnits, radius)
}
b := bytes.NewBuffer(nil)
// Encode as ICO format on Windows, PNG on all other platforms.
if runtime.GOOS == "windows" {
_ = ico.Encode(b, dc.Image())
} else {
_ = png.Encode(b, dc.Image())
}
return b
}
// setAppIcon renders logo and sets it as the systray icon.
func setAppIcon(icon tsLogo) {
if icon.dots == loading.dots {
startLoadingAnimation()
} else {
stopLoadingAnimation()
systray.SetIcon(icon.render().Bytes())
}
}
var (
loadingMu sync.Mutex // protects loadingCancel
// loadingCancel stops the loading animation in the systray icon.
// This is nil if the animation is not currently active.
loadingCancel func()
)
// startLoadingAnimation starts the animated loading icon in the system tray.
// The animation continues until [stopLoadingAnimation] is called.
// If the loading animation is already active, this func does nothing.
func startLoadingAnimation() {
loadingMu.Lock()
defer loadingMu.Unlock()
if loadingCancel != nil {
// loading icon already displayed
return
}
ctx := context.Background()
ctx, loadingCancel = context.WithCancel(ctx)
go func() {
t := time.NewTicker(500 * time.Millisecond)
var i int
for {
select {
case <-ctx.Done():
return
case <-t.C:
systray.SetIcon(loadingLogos[i].render().Bytes())
i++
if i >= len(loadingLogos) {
i = 0
}
}
}
}()
}
// stopLoadingAnimation stops the animated loading icon in the system tray.
// If the loading animation is not currently active, this func does nothing.
func stopLoadingAnimation() {
loadingMu.Lock()
defer loadingMu.Unlock()
if loadingCancel != nil {
loadingCancel()
loadingCancel = nil
}
}

@@ -1,76 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin
// Package systray provides a minimal Tailscale systray application.
package systray
import (
"bufio"
"bytes"
_ "embed"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
)
//go:embed tailscale-systray.service
var embedSystemd string
func InstallStartupScript(initSystem string) error {
switch initSystem {
case "systemd":
return installSystemd()
default:
return fmt.Errorf("unsupported init system '%s'", initSystem)
}
}
func installSystemd() error {
// Find the path to tailscale, just in case it's not where the example file
// has it placed, and replace that before writing the file.
tailscaleBin, err := exec.LookPath("tailscale")
if err != nil {
return fmt.Errorf("failed to find tailscale binary: %w", err)
}
var output bytes.Buffer
scanner := bufio.NewScanner(strings.NewReader(embedSystemd))
for scanner.Scan() {
line := scanner.Text()
if strings.HasPrefix(line, "ExecStart=") {
line = fmt.Sprintf("ExecStart=%s systray", tailscaleBin)
}
output.WriteString(line + "\n")
}
configDir, err := os.UserConfigDir()
if err != nil {
homeDir, err := os.UserHomeDir()
if err != nil {
return fmt.Errorf("unable to locate user home: %w", err)
}
configDir = filepath.Join(homeDir, ".config")
}
systemdDir := filepath.Join(configDir, "systemd", "user")
if err := os.MkdirAll(systemdDir, 0o755); err != nil {
return fmt.Errorf("failed creating systemd user dir: %w", err)
}
serviceFile := filepath.Join(systemdDir, "tailscale-systray.service")
if err := os.WriteFile(serviceFile, output.Bytes(), 0o755); err != nil {
return fmt.Errorf("failed writing systemd user service: %w", err)
}
fmt.Printf("Successfully installed systemd service to: %s\n", serviceFile)
fmt.Println("To enable and start the service, run:")
fmt.Println(" systemctl --user daemon-reload")
fmt.Println(" systemctl --user enable --now tailscale-systray")
return nil
}

@@ -1,801 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin
// Package systray provides a minimal Tailscale systray application.
package systray
import (
"bytes"
"context"
"errors"
"fmt"
"image"
"io"
"log"
"net/http"
"os"
"os/signal"
"runtime"
"slices"
"strings"
"sync"
"syscall"
"time"
"fyne.io/systray"
ico "github.com/Kodeworks/golang-image-ico"
"github.com/atotto/clipboard"
dbus "github.com/godbus/dbus/v5"
"github.com/toqueteos/webbrowser"
"tailscale.com/client/local"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg"
"tailscale.com/util/slicesx"
"tailscale.com/util/stringsx"
)
var (
// newMenuDelay is the amount of time to sleep after creating a new menu,
// but before adding items to it. This works around a bug in some dbus implementations.
newMenuDelay time.Duration
// hideMullvadCities, if true, treats all Mullvad exit-node countries as single-city.
// Instead of rendering a submenu with cities, just select the highest-priority peer.
hideMullvadCities bool
)
// Run starts the systray menu and blocks until the menu exits.
// If client is nil, a default local.Client is used.
func (menu *Menu) Run(client *local.Client) {
if client == nil {
client = &local.Client{}
}
menu.lc = client
menu.updateState()
// exit cleanly on SIGINT and SIGTERM
go func() {
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, syscall.SIGINT, syscall.SIGTERM)
select {
case <-interrupt:
menu.onExit()
case <-menu.bgCtx.Done():
}
}()
go menu.lc.SetGauge(menu.bgCtx, "systray_running", 1)
defer menu.lc.SetGauge(menu.bgCtx, "systray_running", 0)
systray.Run(menu.onReady, menu.onExit)
}
// Menu represents the systray menu, its items, and the current Tailscale state.
type Menu struct {
mu sync.Mutex // protects the entire Menu
lc *local.Client
status *ipnstate.Status
curProfile ipn.LoginProfile
allProfiles []ipn.LoginProfile
// readonly is whether the systray app is running in read-only mode.
// This is set if LocalAPI returns a permission error,
// typically because the user needs to run `tailscale set --operator=$USER`.
readonly bool
bgCtx context.Context // ctx for background tasks not involving menu item clicks
bgCancel context.CancelFunc
// Top-level menu items
connect *systray.MenuItem
disconnect *systray.MenuItem
self *systray.MenuItem
exitNodes *systray.MenuItem
more *systray.MenuItem
rebuildMenu *systray.MenuItem
quit *systray.MenuItem
rebuildCh chan struct{} // triggers a menu rebuild
accountsCh chan ipn.ProfileID
exitNodeCh chan tailcfg.StableNodeID // ID of selected exit node
eventCancel context.CancelFunc // cancel eventLoop
notificationIcon *os.File // icon used for desktop notifications
}
func (menu *Menu) init() {
if menu.bgCtx != nil {
// already initialized
return
}
menu.rebuildCh = make(chan struct{}, 1)
menu.accountsCh = make(chan ipn.ProfileID)
menu.exitNodeCh = make(chan tailcfg.StableNodeID)
// dbus wants a file path for notification icons, so copy to a temp file.
menu.notificationIcon, _ = os.CreateTemp("", "tailscale-systray.png")
io.Copy(menu.notificationIcon, connected.renderWithBorder(3))
menu.bgCtx, menu.bgCancel = context.WithCancel(context.Background())
go menu.watchIPNBus()
}
func init() {
if runtime.GOOS != "linux" {
// so far, these tweaks are only needed on Linux
return
}
desktop := strings.ToLower(os.Getenv("XDG_CURRENT_DESKTOP"))
switch desktop {
case "gnome", "ubuntu:gnome":
// GNOME expands submenus downward in the main menu, rather than flyouts to the side.
// Either as a result of that or another limitation, there seems to be a maximum depth of submenus.
// Mullvad countries that have a city submenu are not being rendered, and so can't be selected.
// Handle this by simply treating all Mullvad countries as single-city and selecting the best peer.
hideMullvadCities = true
case "kde":
// KDE doesn't need a delay, and actually won't render submenus
// if we delay for more than about 400µs.
newMenuDelay = 0
default:
// Add a slight delay to ensure the menu is created before adding items.
//
// Systray implementations that use libdbusmenu sometimes process messages out of order,
// resulting in errors such as:
// (waybar:153009): LIBDBUSMENU-GTK-WARNING **: 18:07:11.551: Children but no menu, someone's been naughty with their 'children-display' property: 'submenu'
//
// See also: https://github.com/fyne-io/systray/issues/12
newMenuDelay = 10 * time.Millisecond
}
}
// onReady is called by the systray package when the menu is ready to be built.
func (menu *Menu) onReady() {
log.Printf("starting")
if os.Getuid() == 0 || os.Getuid() != os.Geteuid() || os.Getenv("SUDO_USER") != "" || os.Getenv("DOAS_USER") != "" {
fmt.Fprintln(os.Stderr, `
It appears that you might be running the systray with sudo/doas.
This can lead to issues with D-Bus, and should be avoided.
The systray application should be run with the same user as your desktop session.
This usually means that you should run the application like:
tailscale systray
See https://tailscale.com/kb/1597/linux-systray for more information.`)
}
setAppIcon(disconnected)
menu.rebuild()
menu.mu.Lock()
if menu.readonly {
fmt.Fprintln(os.Stderr, `
No permission to manage Tailscale. Set operator by running:
sudo tailscale set --operator=$USER
See https://tailscale.com/s/cli-operator for more information.`)
}
menu.mu.Unlock()
}
// updateState updates the Menu state from the Tailscale local client.
func (menu *Menu) updateState() {
menu.mu.Lock()
defer menu.mu.Unlock()
menu.init()
menu.readonly = false
var err error
menu.status, err = menu.lc.Status(menu.bgCtx)
if err != nil {
log.Print(err)
}
menu.curProfile, menu.allProfiles, err = menu.lc.ProfileStatus(menu.bgCtx)
if err != nil {
if local.IsAccessDeniedError(err) {
menu.readonly = true
}
log.Print(err)
}
}
// rebuild the systray menu based on the current Tailscale state.
//
// We currently rebuild the entire menu because it is not easy to update the existing menu.
// You cannot iterate over the items in a menu, nor can you remove some items like separators.
// So for now we rebuild the whole thing, and can optimize this later if needed.
func (menu *Menu) rebuild() {
menu.mu.Lock()
defer menu.mu.Unlock()
menu.init()
if menu.eventCancel != nil {
menu.eventCancel()
}
ctx := context.Background()
ctx, menu.eventCancel = context.WithCancel(ctx)
systray.ResetMenu()
if menu.readonly {
const readonlyMsg = "No permission to manage Tailscale.\nSee tailscale.com/s/cli-operator"
m := systray.AddMenuItem(readonlyMsg, "")
onClick(ctx, m, func(_ context.Context) {
webbrowser.Open("https://tailscale.com/s/cli-operator")
})
systray.AddSeparator()
}
menu.connect = systray.AddMenuItem("Connect", "")
menu.disconnect = systray.AddMenuItem("Disconnect", "")
menu.disconnect.Hide()
systray.AddSeparator()
// delay to prevent a race when setting the icon on first start
time.Sleep(newMenuDelay)
// Set systray menu icon and title.
// Also adjust connect/disconnect menu items if needed.
var backendState string
if menu.status != nil {
backendState = menu.status.BackendState
}
switch backendState {
case ipn.Running.String():
if menu.status.ExitNodeStatus != nil && !menu.status.ExitNodeStatus.ID.IsZero() {
if menu.status.ExitNodeStatus.Online {
setTooltip("Using exit node")
setAppIcon(exitNodeOnline)
} else {
setTooltip("Exit node offline")
setAppIcon(exitNodeOffline)
}
} else {
setTooltip(fmt.Sprintf("Connected to %s", menu.status.CurrentTailnet.Name))
setAppIcon(connected)
}
menu.connect.SetTitle("Connected")
menu.connect.Disable()
menu.disconnect.Show()
menu.disconnect.Enable()
case ipn.Starting.String():
setTooltip("Connecting")
setAppIcon(loading)
default:
setTooltip("Disconnected")
setAppIcon(disconnected)
}
if menu.readonly {
menu.connect.Disable()
menu.disconnect.Disable()
}
account := "Account"
if pt := profileTitle(menu.curProfile); pt != "" {
account = pt
}
if !menu.readonly {
accounts := systray.AddMenuItem(account, "")
setRemoteIcon(accounts, menu.curProfile.UserProfile.ProfilePicURL)
time.Sleep(newMenuDelay)
for _, profile := range menu.allProfiles {
title := profileTitle(profile)
var item *systray.MenuItem
if profile.ID == menu.curProfile.ID {
item = accounts.AddSubMenuItemCheckbox(title, "", true)
} else {
item = accounts.AddSubMenuItem(title, "")
}
setRemoteIcon(item, profile.UserProfile.ProfilePicURL)
onClick(ctx, item, func(ctx context.Context) {
select {
case <-ctx.Done():
case menu.accountsCh <- profile.ID:
}
})
}
}
if menu.status != nil && menu.status.Self != nil && len(menu.status.Self.TailscaleIPs) > 0 {
title := fmt.Sprintf("This Device: %s (%s)", menu.status.Self.HostName, menu.status.Self.TailscaleIPs[0])
menu.self = systray.AddMenuItem(title, "")
} else {
menu.self = systray.AddMenuItem("This Device: not connected", "")
menu.self.Disable()
}
systray.AddSeparator()
if !menu.readonly {
menu.rebuildExitNodeMenu(ctx)
}
menu.more = systray.AddMenuItem("More settings", "")
if menu.status != nil && menu.status.BackendState == "Running" {
// web client is only available if backend is running
onClick(ctx, menu.more, func(_ context.Context) {
webbrowser.Open("http://100.100.100.100/")
})
} else {
menu.more.Disable()
}
// TODO(#15528): this menu item shouldn't be necessary at all,
// but is at least more discoverable than having users switch profiles or exit nodes.
menu.rebuildMenu = systray.AddMenuItem("Rebuild menu", "Fix missing menu items")
onClick(ctx, menu.rebuildMenu, func(ctx context.Context) {
select {
case <-ctx.Done():
case menu.rebuildCh <- struct{}{}:
}
})
menu.rebuildMenu.Enable()
menu.quit = systray.AddMenuItem("Quit", "Quit the app")
menu.quit.Enable()
go menu.eventLoop(ctx)
}
// profileTitle returns the title string for a profile menu item.
func profileTitle(profile ipn.LoginProfile) string {
title := profile.Name
if profile.NetworkProfile.DomainName != "" {
if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
// windows and mac don't support multi-line menu items
title += " (" + profile.NetworkProfile.DisplayNameOrDefault() + ")"
} else {
title += "\n" + profile.NetworkProfile.DisplayNameOrDefault()
}
}
return title
}
var (
cacheMu sync.Mutex
httpCache = map[string][]byte{} // URL => response body
)
// setRemoteIcon sets the icon for menu to the specified remote image.
// Remote images are fetched as needed and cached.
func setRemoteIcon(menu *systray.MenuItem, urlStr string) {
if menu == nil || urlStr == "" {
return
}
cacheMu.Lock()
defer cacheMu.Unlock()
b, ok := httpCache[urlStr]
if !ok {
resp, err := http.Get(urlStr)
if err == nil && resp.StatusCode == http.StatusOK {
b, _ = io.ReadAll(resp.Body)
// Convert image to ICO format on Windows
if runtime.GOOS == "windows" {
im, _, err := image.Decode(bytes.NewReader(b))
if err != nil {
return
}
buf := bytes.NewBuffer(nil)
if err := ico.Encode(buf, im); err != nil {
return
}
b = buf.Bytes()
}
httpCache[urlStr] = b
resp.Body.Close()
}
}
if len(b) > 0 {
menu.SetIcon(b)
}
}
// setTooltip sets the tooltip text for the systray icon.
func setTooltip(text string) {
if runtime.GOOS == "darwin" || runtime.GOOS == "windows" {
systray.SetTooltip(text)
} else {
// on Linux, SetTitle actually sets the tooltip
systray.SetTitle(text)
}
}
// eventLoop is the main event loop for handling click events on menu items
// and responding to Tailscale state changes.
// This method does not return until ctx.Done is closed.
func (menu *Menu) eventLoop(ctx context.Context) {
for {
select {
case <-ctx.Done():
return
case <-menu.rebuildCh:
menu.updateState()
menu.rebuild()
case <-menu.connect.ClickedCh:
_, err := menu.lc.EditPrefs(ctx, &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
WantRunning: true,
},
WantRunningSet: true,
})
if err != nil {
log.Printf("error connecting: %v", err)
}
case <-menu.disconnect.ClickedCh:
_, err := menu.lc.EditPrefs(ctx, &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
WantRunning: false,
},
WantRunningSet: true,
})
if err != nil {
log.Printf("error disconnecting: %v", err)
}
case <-menu.self.ClickedCh:
menu.copyTailscaleIP(menu.status.Self)
case id := <-menu.accountsCh:
if err := menu.lc.SwitchProfile(ctx, id); err != nil {
log.Printf("error switching to profile ID %v: %v", id, err)
}
case exitNode := <-menu.exitNodeCh:
if exitNode.IsZero() {
log.Print("disable exit node")
if err := menu.lc.SetUseExitNode(ctx, false); err != nil {
log.Printf("error disabling exit node: %v", err)
}
} else {
log.Printf("enable exit node: %v", exitNode)
mp := &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
ExitNodeID: exitNode,
},
ExitNodeIDSet: true,
}
if _, err := menu.lc.EditPrefs(ctx, mp); err != nil {
log.Printf("error setting exit node: %v", err)
}
}
case <-menu.quit.ClickedCh:
systray.Quit()
}
}
}
// onClick registers a click handler for a menu item.
func onClick(ctx context.Context, item *systray.MenuItem, fn func(ctx context.Context)) {
go func() {
for {
select {
case <-ctx.Done():
return
case <-item.ClickedCh:
fn(ctx)
}
}
}()
}
// watchIPNBus subscribes to the tailscale event bus and triggers menu rebuilds on state changes.
// This method does not return.
func (menu *Menu) watchIPNBus() {
for {
if err := menu.watchIPNBusInner(); err != nil {
log.Println(err)
if errors.Is(err, context.Canceled) {
// If the context got canceled, we will never be able to
// reconnect to IPN bus, so exit the process.
log.Fatalf("watchIPNBus: %v", err)
}
}
// If our watch connection breaks, wait a bit before reconnecting. No
// reason to spam the logs if e.g. tailscaled is restarting or goes
// down.
time.Sleep(3 * time.Second)
}
}
func (menu *Menu) watchIPNBusInner() error {
watcher, err := menu.lc.WatchIPNBus(menu.bgCtx, 0)
if err != nil {
return fmt.Errorf("watching ipn bus: %w", err)
}
defer watcher.Close()
for {
select {
case <-menu.bgCtx.Done():
return nil
default:
n, err := watcher.Next()
if err != nil {
return fmt.Errorf("ipnbus error: %w", err)
}
var rebuild bool
if n.State != nil {
log.Printf("new state: %v", n.State)
rebuild = true
}
if n.Prefs != nil {
rebuild = true
}
if rebuild {
menu.rebuildCh <- struct{}{}
}
}
}
}
// copyTailscaleIP copies the first Tailscale IP of the given device to the clipboard
// and sends a notification with the copied value.
func (menu *Menu) copyTailscaleIP(device *ipnstate.PeerStatus) {
    if device == nil || len(device.TailscaleIPs) == 0 {
        return
    }
    name := strings.Split(device.DNSName, ".")[0]
    ip := device.TailscaleIPs[0].String()
    err := clipboard.WriteAll(ip)
    if err != nil {
        log.Printf("clipboard error: %v", err)
    } else {
        menu.sendNotification(fmt.Sprintf("Copied Address for %v", name), ip)
    }
}

// sendNotification sends a desktop notification with the given title and content.
func (menu *Menu) sendNotification(title, content string) {
    conn, err := dbus.SessionBus()
    if err != nil {
        log.Printf("dbus: %v", err)
        return
    }
    timeout := 3 * time.Second
    obj := conn.Object("org.freedesktop.Notifications", "/org/freedesktop/Notifications")
    call := obj.Call("org.freedesktop.Notifications.Notify", 0, "Tailscale", uint32(0),
        menu.notificationIcon.Name(), title, content, []string{}, map[string]dbus.Variant{}, int32(timeout.Milliseconds()))
    if call.Err != nil {
        log.Printf("dbus: %v", call.Err)
    }
}
func (menu *Menu) rebuildExitNodeMenu(ctx context.Context) {
    if menu.status == nil {
        return
    }
    status := menu.status
    menu.exitNodes = systray.AddMenuItem("Exit Nodes", "")
    time.Sleep(newMenuDelay)

    // Register a click handler for a menu item to set nodeID as the exit node.
    setExitNodeOnClick := func(item *systray.MenuItem, nodeID tailcfg.StableNodeID) {
        onClick(ctx, item, func(ctx context.Context) {
            select {
            case <-ctx.Done():
            case menu.exitNodeCh <- nodeID:
            }
        })
    }

    noExitNodeMenu := menu.exitNodes.AddSubMenuItemCheckbox("None", "", status.ExitNodeStatus == nil)
    setExitNodeOnClick(noExitNodeMenu, "")

    // Show recommended exit node if available.
    if status.Self.CapMap.Contains(tailcfg.NodeAttrSuggestExitNodeUI) {
        sugg, err := menu.lc.SuggestExitNode(ctx)
        if err == nil {
            title := "Recommended: "
            if loc := sugg.Location; loc.Valid() && loc.Country() != "" {
                flag := countryFlag(loc.CountryCode())
                title += fmt.Sprintf("%s %s: %s", flag, loc.Country(), loc.City())
            } else {
                title += strings.Split(sugg.Name, ".")[0]
            }
            menu.exitNodes.AddSeparator()
            rm := menu.exitNodes.AddSubMenuItemCheckbox(title, "", false)
            setExitNodeOnClick(rm, sugg.ID)
            if status.ExitNodeStatus != nil && sugg.ID == status.ExitNodeStatus.ID {
                rm.Check()
            }
        }
    }

    // Add tailnet exit nodes if present.
    var tailnetExitNodes []*ipnstate.PeerStatus
    for _, ps := range status.Peer {
        if ps.ExitNodeOption && ps.Location == nil {
            tailnetExitNodes = append(tailnetExitNodes, ps)
        }
    }
    if len(tailnetExitNodes) > 0 {
        menu.exitNodes.AddSeparator()
        menu.exitNodes.AddSubMenuItem("Tailnet Exit Nodes", "").Disable()
        for _, ps := range status.Peer {
            if !ps.ExitNodeOption || ps.Location != nil {
                continue
            }
            name := strings.Split(ps.DNSName, ".")[0]
            if !ps.Online {
                name += " (offline)"
            }
            sm := menu.exitNodes.AddSubMenuItemCheckbox(name, "", false)
            if !ps.Online {
                sm.Disable()
            }
            if status.ExitNodeStatus != nil && ps.ID == status.ExitNodeStatus.ID {
                sm.Check()
            }
            setExitNodeOnClick(sm, ps.ID)
        }
    }

    // Add mullvad exit nodes if present.
    var mullvadExitNodes mullvadPeers
    if status.Self.CapMap.Contains("mullvad") {
        mullvadExitNodes = newMullvadPeers(status)
    }
    if len(mullvadExitNodes.countries) > 0 {
        menu.exitNodes.AddSeparator()
        menu.exitNodes.AddSubMenuItem("Location-based Exit Nodes", "").Disable()
        mullvadMenu := menu.exitNodes.AddSubMenuItemCheckbox("Mullvad VPN", "", false)
        for _, country := range mullvadExitNodes.sortedCountries() {
            flag := countryFlag(country.code)
            countryMenu := mullvadMenu.AddSubMenuItemCheckbox(flag+" "+country.name, "", false)

            // Single-city country: no submenu.
            if len(country.cities) == 1 || hideMullvadCities {
                setExitNodeOnClick(countryMenu, country.best.ID)
                if status.ExitNodeStatus != nil {
                    for _, city := range country.cities {
                        for _, ps := range city.peers {
                            if status.ExitNodeStatus.ID == ps.ID {
                                mullvadMenu.Check()
                                countryMenu.Check()
                            }
                        }
                    }
                }
                continue
            }

            // Multi-city country: build a submenu with a "best available" option and cities.
            time.Sleep(newMenuDelay)
            bm := countryMenu.AddSubMenuItemCheckbox("Best Available", "", false)
            setExitNodeOnClick(bm, country.best.ID)
            countryMenu.AddSeparator()
            for _, city := range country.sortedCities() {
                cityMenu := countryMenu.AddSubMenuItemCheckbox(city.name, "", false)
                setExitNodeOnClick(cityMenu, city.best.ID)
                if status.ExitNodeStatus != nil {
                    for _, ps := range city.peers {
                        if status.ExitNodeStatus.ID == ps.ID {
                            mullvadMenu.Check()
                            countryMenu.Check()
                            cityMenu.Check()
                        }
                    }
                }
            }
        }
    }

    // TODO: "Allow Local Network Access" and "Run Exit Node" menu items
}
// mullvadPeers contains all mullvad peer nodes, sorted by country and city.
type mullvadPeers struct {
    countries map[string]*mvCountry // country code (uppercase) => country
}

// sortedCountries returns countries containing mullvad nodes, sorted by name.
func (mp mullvadPeers) sortedCountries() []*mvCountry {
    countries := slicesx.MapValues(mp.countries)
    slices.SortFunc(countries, func(a, b *mvCountry) int {
        return stringsx.CompareFold(a.name, b.name)
    })
    return countries
}

type mvCountry struct {
    code   string
    name   string
    best   *ipnstate.PeerStatus // highest priority peer in the country
    cities map[string]*mvCity   // city code => city
}

// sortedCities returns cities containing mullvad nodes, sorted by name.
func (mc *mvCountry) sortedCities() []*mvCity {
    cities := slicesx.MapValues(mc.cities)
    slices.SortFunc(cities, func(a, b *mvCity) int {
        return stringsx.CompareFold(a.name, b.name)
    })
    return cities
}

// countryFlag takes a 2-character ASCII string and returns the corresponding emoji flag.
// It returns the empty string on error.
func countryFlag(code string) string {
    if len(code) != 2 {
        return ""
    }
    runes := make([]rune, 0, 2)
    for i := range 2 {
        b := code[i] | 32 // lowercase
        if b < 'a' || b > 'z' {
            return ""
        }
        // https://en.wikipedia.org/wiki/Regional_indicator_symbol
        runes = append(runes, 0x1F1E6+rune(b-'a'))
    }
    return string(runes)
}
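countryFlag relies on the Unicode regional indicator block: each ASCII letter a–z maps to a code point in U+1F1E6..U+1F1FF, and a pair of such code points renders as a country flag emoji. A standalone sketch of the same mapping (flag is a hypothetical name; the logic mirrors countryFlag above):

```go
package main

import "fmt"

// flag converts a 2-letter ISO country code to its emoji flag by
// offsetting each letter into the regional indicator block that
// starts at U+1F1E6 (REGIONAL INDICATOR SYMBOL LETTER A).
func flag(code string) string {
    if len(code) != 2 {
        return ""
    }
    out := make([]rune, 0, 2)
    for i := 0; i < 2; i++ {
        b := code[i] | 32 // force ASCII lowercase
        if b < 'a' || b > 'z' {
            return "" // reject digits, punctuation, non-ASCII bytes
        }
        out = append(out, 0x1F1E6+rune(b-'a'))
    }
    return string(out)
}

func main() {
    fmt.Println(flag("us"))        // 🇺🇸
    fmt.Println(flag("SE"))        // 🇸🇪 (case-insensitive)
    fmt.Printf("%q\n", flag("u1")) // "" for non-letter input
}
```

The `| 32` trick works because ASCII upper- and lowercase letters differ only in bit 5, so both cases land on the same regional indicator.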
type mvCity struct {
    name  string
    best  *ipnstate.PeerStatus // highest priority peer in the city
    peers []*ipnstate.PeerStatus
}
func newMullvadPeers(status *ipnstate.Status) mullvadPeers {
    countries := make(map[string]*mvCountry)
    for _, ps := range status.Peer {
        if !ps.ExitNodeOption || ps.Location == nil {
            continue
        }
        loc := ps.Location
        country, ok := countries[loc.CountryCode]
        if !ok {
            country = &mvCountry{
                code:   loc.CountryCode,
                name:   loc.Country,
                cities: make(map[string]*mvCity),
            }
            countries[loc.CountryCode] = country
        }
        city, ok := country.cities[loc.CityCode]
        if !ok {
            city = &mvCity{
                name: loc.City,
            }
            country.cities[loc.CityCode] = city
        }
        city.peers = append(city.peers, ps)
        if city.best == nil || ps.Location.Priority > city.best.Location.Priority {
            city.best = ps
        }
        if country.best == nil || ps.Location.Priority > country.best.Location.Priority {
            country.best = ps
        }
    }
    return mullvadPeers{countries}
}
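newMullvadPeers keeps, per city and per country, the peer with the highest Location.Priority. The same "best per group" selection can be sketched on toy data (peer and bestByCountry are illustrative stand-ins, not from the source):

```go
package main

import "fmt"

// peer is a stand-in for ipnstate.PeerStatus with just the fields
// the grouping logic needs.
type peer struct {
    name     string
    country  string // country code
    priority int    // location priority; higher wins
}

// bestByCountry mirrors the selection in newMullvadPeers: within
// each country, keep only the highest-priority peer seen so far.
func bestByCountry(peers []peer) map[string]peer {
    best := make(map[string]peer)
    for _, p := range peers {
        if cur, ok := best[p.country]; !ok || p.priority > cur.priority {
            best[p.country] = p
        }
    }
    return best
}

func main() {
    peers := []peer{
        {"de-ber-1", "DE", 10},
        {"de-fra-2", "DE", 40},
        {"se-sto-1", "SE", 25},
    }
    fmt.Println(bestByCountry(peers)["DE"].name) // de-fra-2
}
```

A single pass suffices because "best" only ever needs to compare the current candidate against the incumbent, exactly as the real function does for both the city and country maps.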
// onExit is called by the systray package when the menu is exiting.
func (menu *Menu) onExit() {
    log.Printf("exiting")
    if menu.bgCancel != nil {
        menu.bgCancel()
    }
    if menu.eventCancel != nil {
        menu.eventCancel()
    }
    os.Remove(menu.notificationIcon.Name())
}

@@ -1,10 +0,0 @@
-[Unit]
-Description=Tailscale System Tray
-After=graphical.target
-
-[Service]
-Type=simple
-ExecStart=/usr/bin/tailscale systray
-
-[Install]
-WantedBy=default.target

@@ -12,7 +12,6 @@ import (
     "fmt"
     "net/http"
     "net/netip"
-    "net/url"
 )
 
 // ACLRow defines a rule that grants access by a set of users or groups to a set
@@ -84,7 +83,7 @@ func (c *Client) ACL(ctx context.Context) (acl *ACL, err error) {
         }
     }()
 
-    path := c.BuildTailnetURL("acl")
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/acl", c.baseURL(), c.tailnet)
     req, err := http.NewRequestWithContext(ctx, "GET", path, nil)
     if err != nil {
         return nil, err
@@ -98,7 +97,7 @@ func (c *Client) ACL(ctx context.Context) (acl *ACL, err error) {
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
 
     // Otherwise, try to decode the response.
@@ -127,7 +126,7 @@ func (c *Client) ACLHuJSON(ctx context.Context) (acl *ACLHuJSON, err error) {
         }
     }()
 
-    path := c.BuildTailnetURL("acl", url.Values{"details": {"1"}})
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/acl?details=1", c.baseURL(), c.tailnet)
     req, err := http.NewRequestWithContext(ctx, "GET", path, nil)
     if err != nil {
         return nil, err
@@ -139,7 +138,7 @@ func (c *Client) ACLHuJSON(ctx context.Context) (acl *ACLHuJSON, err error) {
     }
 
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
 
     data := struct {
@@ -147,7 +146,7 @@ func (c *Client) ACLHuJSON(ctx context.Context) (acl *ACLHuJSON, err error) {
         Warnings []string `json:"warnings"`
     }{}
     if err := json.Unmarshal(b, &data); err != nil {
-        return nil, fmt.Errorf("json.Unmarshal %q: %w", b, err)
+        return nil, err
     }
 
     acl = &ACLHuJSON{
@@ -185,7 +184,7 @@ func (e ACLTestError) Error() string {
 }
 
 func (c *Client) aclPOSTRequest(ctx context.Context, body []byte, avoidCollisions bool, etag, acceptHeader string) ([]byte, string, error) {
-    path := c.BuildTailnetURL("acl")
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/acl", c.baseURL(), c.tailnet)
     req, err := http.NewRequestWithContext(ctx, "POST", path, bytes.NewBuffer(body))
     if err != nil {
         return nil, "", err
@@ -329,7 +328,7 @@ type ACLPreview struct {
 }
 
 func (c *Client) previewACLPostRequest(ctx context.Context, body []byte, previewType string, previewFor string) (res *ACLPreviewResponse, err error) {
-    path := c.BuildTailnetURL("acl", "preview")
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/acl/preview", c.baseURL(), c.tailnet)
     req, err := http.NewRequestWithContext(ctx, "POST", path, bytes.NewBuffer(body))
     if err != nil {
         return nil, err
@@ -351,7 +350,7 @@ func (c *Client) previewACLPostRequest(ctx context.Context, body []byte, preview
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
     if err = json.Unmarshal(b, &res); err != nil {
         return nil, err
@@ -489,7 +488,7 @@ func (c *Client) ValidateACLJSON(ctx context.Context, source, dest string) (test
         return nil, err
     }
 
-    path := c.BuildTailnetURL("acl", "validate")
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/acl/validate", c.baseURL(), c.tailnet)
     req, err := http.NewRequestWithContext(ctx, "POST", path, bytes.NewBuffer(postData))
     if err != nil {
         return nil, err

@@ -7,29 +7,11 @@ package apitype
 
 import (
     "tailscale.com/tailcfg"
     "tailscale.com/types/dnstype"
-    "tailscale.com/util/ctxkey"
 )
 
 // LocalAPIHost is the Host header value used by the LocalAPI.
 const LocalAPIHost = "local-tailscaled.sock"
 
-// RequestReasonHeader is the header used to pass justification for a LocalAPI request,
-// such as when a user wants to perform an action they don't have permission for,
-// and a policy allows it with justification. As of 2025-01-29, it is only used to
-// allow a user to disconnect Tailscale when the "always-on" mode is enabled.
-//
-// The header value is base64-encoded using the standard encoding defined in RFC 4648.
-//
-// See tailscale/corp#26146.
-const RequestReasonHeader = "X-Tailscale-Reason"
-
-// RequestReasonKey is the context key used to pass the request reason
-// when making a LocalAPI request via [local.Client].
-// It's value is a raw string. An empty string means no reason was provided.
-//
-// See tailscale/corp#26146.
-var RequestReasonKey = ctxkey.New(RequestReasonHeader, "")
-
 // WhoIsResponse is the JSON type returned by tailscaled debug server's /whois?ip=$IP handler.
 // In successful whois responses, Node and UserProfile are never nil.
 type WhoIsResponse struct {
@@ -94,13 +76,3 @@ type DNSQueryResponse struct {
     // Resolvers is the list of resolvers that the forwarder deemed able to resolve the query.
     Resolvers []*dnstype.Resolver
 }
-
-// OptionalFeatures describes which optional features are enabled in the build.
-type OptionalFeatures struct {
-    // Features is the map of optional feature names to whether they are
-    // enabled.
-    //
-    // Disabled features may be absent from the map. (That is, false values
-    // are not guaranteed to be present.)
-    Features map[string]bool
-}

@@ -3,50 +3,17 @@
 package apitype
 
-// DNSConfig is the DNS configuration for a tailnet
-// used in /tailnet/{tailnet}/dns/config.
 type DNSConfig struct {
-    // Resolvers are the global DNS resolvers to use
-    // overriding the local OS configuration.
-    Resolvers []DNSResolver `json:"resolvers"`
-
-    // FallbackResolvers are used as global resolvers when
-    // the client is unable to determine the OS's preferred DNS servers.
-    FallbackResolvers []DNSResolver `json:"fallbackResolvers"`
-
-    // Routes map DNS name suffixes to a set of DNS resolvers,
-    // used for Split DNS and other advanced routing overlays.
-    Routes map[string][]DNSResolver `json:"routes"`
-
-    // Domains are the search domains to use.
-    Domains []string `json:"domains"`
-
-    // Proxied means MagicDNS is enabled.
-    Proxied bool `json:"proxied"`
-
-    // TempCorpIssue13969 is from an internal hack day prototype,
-    // See tailscale/corp#13969.
-    TempCorpIssue13969 string `json:"TempCorpIssue13969,omitempty"`
-
-    // Nameservers are the IP addresses of global nameservers to use.
-    // This is a deprecated format but may still be found in tailnets
-    // that were configured a long time ago. When making updates,
-    // set Resolvers and leave Nameservers empty.
-    Nameservers []string `json:"nameservers"`
+    Resolvers          []DNSResolver            `json:"resolvers"`
+    FallbackResolvers  []DNSResolver            `json:"fallbackResolvers"`
+    Routes             map[string][]DNSResolver `json:"routes"`
+    Domains            []string                 `json:"domains"`
+    Nameservers        []string                 `json:"nameservers"`
+    Proxied            bool                     `json:"proxied"`
+    TempCorpIssue13969 string                   `json:"TempCorpIssue13969,omitempty"`
 }
 
-// DNSResolver is a DNS resolver in a DNS configuration.
 type DNSResolver struct {
-    // Addr is the address of the DNS resolver.
-    // It is usually an IP address or a DoH URL.
-    // See dnstype.Resolver.Addr for full details.
-    Addr string `json:"addr"`
-
-    // BootstrapResolution is an optional suggested resolution for
-    // the DoT/DoH resolver.
-    BootstrapResolution []string `json:"bootstrapResolution,omitempty"`
-
-    // UseWithExitNode signals this resolver should be used
-    // even when a tailscale exit node is configured on a device.
-    UseWithExitNode bool `json:"useWithExitNode,omitempty"`
+    Addr                string   `json:"addr"`
+    BootstrapResolution []string `json:"bootstrapResolution,omitempty"`
 }

@@ -1,34 +0,0 @@
-// Copyright (c) Tailscale Inc & AUTHORS
-// SPDX-License-Identifier: BSD-3-Clause
-
-//go:build !js && !ts_omit_acme
-
-package tailscale
-
-import (
-    "context"
-    "crypto/tls"
-
-    "tailscale.com/client/local"
-)
-
-// GetCertificate is an alias for [tailscale.com/client/local.GetCertificate].
-//
-// Deprecated: import [tailscale.com/client/local] instead and use [local.Client.GetCertificate].
-func GetCertificate(hi *tls.ClientHelloInfo) (*tls.Certificate, error) {
-    return local.GetCertificate(hi)
-}
-
-// CertPair is an alias for [tailscale.com/client/local.CertPair].
-//
-// Deprecated: import [tailscale.com/client/local] instead and use [local.Client.CertPair].
-func CertPair(ctx context.Context, domain string) (certPEM, keyPEM []byte, err error) {
-    return local.CertPair(ctx, domain)
-}
-
-// ExpandSNIName is an alias for [tailscale.com/client/local.ExpandSNIName].
-//
-// Deprecated: import [tailscale.com/client/local] instead and use [local.Client.ExpandSNIName].
-func ExpandSNIName(ctx context.Context, name string) (fqdn string, ok bool) {
-    return local.ExpandSNIName(ctx, name)
-}

@@ -79,13 +79,6 @@ type Device struct {
     // Tailscale have attempted to collect this from the device but it has not
     // opted in, PostureIdentity will have Disabled=true.
     PostureIdentity *DevicePostureIdentity `json:"postureIdentity"`
-
-    // TailnetLockKey is the tailnet lock public key of the node as a hex string.
-    TailnetLockKey string `json:"tailnetLockKey,omitempty"`
-
-    // TailnetLockErr indicates an issue with the tailnet lock node-key signature
-    // on this device. This field is only populated when tailnet lock is enabled.
-    TailnetLockErr string `json:"tailnetLockError,omitempty"`
 }
 
 type DevicePostureIdentity struct {
@@ -138,7 +131,7 @@ func (c *Client) Devices(ctx context.Context, fields *DeviceFieldsOpts) (deviceL
         }
     }()
 
-    path := c.BuildTailnetURL("devices")
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/devices", c.baseURL(), c.tailnet)
     req, err := http.NewRequestWithContext(ctx, "GET", path, nil)
     if err != nil {
         return nil, err
@@ -156,7 +149,7 @@ func (c *Client) Devices(ctx context.Context, fields *DeviceFieldsOpts) (deviceL
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
 
     var devices GetDevicesResponse
@@ -195,7 +188,7 @@ func (c *Client) Device(ctx context.Context, deviceID string, fields *DeviceFiel
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
 
     err = json.Unmarshal(b, &device)
@@ -228,7 +221,7 @@ func (c *Client) DeleteDevice(ctx context.Context, deviceID string) (err error)
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return HandleErrorResponse(b, resp)
+        return handleErrorResponse(b, resp)
     }
     return nil
 }
@@ -260,7 +253,7 @@ func (c *Client) SetAuthorized(ctx context.Context, deviceID string, authorized
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return HandleErrorResponse(b, resp)
+        return handleErrorResponse(b, resp)
     }
 
     return nil
@@ -288,7 +281,7 @@ func (c *Client) SetTags(ctx context.Context, deviceID string, tags []string) er
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return HandleErrorResponse(b, resp)
+        return handleErrorResponse(b, resp)
     }
 
     return nil
@@ -44,7 +44,7 @@ type DNSPreferences struct {
 }
 
 func (c *Client) dnsGETRequest(ctx context.Context, endpoint string) ([]byte, error) {
-    path := c.BuildTailnetURL("dns", endpoint)
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/dns/%s", c.baseURL(), c.tailnet, endpoint)
     req, err := http.NewRequestWithContext(ctx, "GET", path, nil)
     if err != nil {
         return nil, err
@@ -57,14 +57,14 @@ func (c *Client) dnsGETRequest(ctx context.Context, endpoint string) ([]byte, er
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
     return b, nil
 }
 
 func (c *Client) dnsPOSTRequest(ctx context.Context, endpoint string, postData any) ([]byte, error) {
-    path := c.BuildTailnetURL("dns", endpoint)
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/dns/%s", c.baseURL(), c.tailnet, endpoint)
     data, err := json.Marshal(&postData)
     if err != nil {
         return nil, err
@@ -84,7 +84,7 @@ func (c *Client) dnsPOSTRequest(ctx context.Context, endpoint string, postData a
     // If status code was not successful, return the error.
     // TODO: Change the check for the StatusCode to include other 2XX success codes.
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
     return b, nil

@@ -11,14 +11,13 @@ import (
     "log"
     "net/http"
 
-    "tailscale.com/client/local"
+    "tailscale.com/client/tailscale"
 )
 
 func main() {
-    var lc local.Client
     s := &http.Server{
         TLSConfig: &tls.Config{
-            GetCertificate: lc.GetCertificate,
+            GetCertificate: tailscale.GetCertificate,
         },
         Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
             io.WriteString(w, "<h1>Hello from Tailscale!</h1> It works.")

@@ -40,7 +40,7 @@ type KeyDeviceCreateCapabilities struct {
 
 // Keys returns the list of keys for the current user.
 func (c *Client) Keys(ctx context.Context) ([]string, error) {
-    path := c.BuildTailnetURL("keys")
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/keys", c.baseURL(), c.tailnet)
     req, err := http.NewRequestWithContext(ctx, "GET", path, nil)
     if err != nil {
         return nil, err
@@ -51,7 +51,7 @@ func (c *Client) Keys(ctx context.Context) ([]string, error) {
         return nil, err
     }
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
 
     var keys struct {
@@ -99,7 +99,7 @@ func (c *Client) CreateKeyWithExpiry(ctx context.Context, caps KeyCapabilities,
         return "", nil, err
     }
 
-    path := c.BuildTailnetURL("keys")
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/keys", c.baseURL(), c.tailnet)
     req, err := http.NewRequestWithContext(ctx, "POST", path, bytes.NewReader(bs))
     if err != nil {
         return "", nil, err
@@ -110,7 +110,7 @@ func (c *Client) CreateKeyWithExpiry(ctx context.Context, caps KeyCapabilities,
         return "", nil, err
     }
     if resp.StatusCode != http.StatusOK {
-        return "", nil, HandleErrorResponse(b, resp)
+        return "", nil, handleErrorResponse(b, resp)
     }
 
     var key struct {
@@ -126,7 +126,7 @@ func (c *Client) CreateKeyWithExpiry(ctx context.Context, caps KeyCapabilities,
 // Key returns the metadata for the given key ID. Currently, capabilities are
 // only returned for auth keys, API keys only return general metadata.
 func (c *Client) Key(ctx context.Context, id string) (*Key, error) {
-    path := c.BuildTailnetURL("keys", id)
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/keys/%s", c.baseURL(), c.tailnet, id)
     req, err := http.NewRequestWithContext(ctx, "GET", path, nil)
     if err != nil {
         return nil, err
@@ -137,7 +137,7 @@ func (c *Client) Key(ctx context.Context, id string) (*Key, error) {
         return nil, err
     }
     if resp.StatusCode != http.StatusOK {
-        return nil, HandleErrorResponse(b, resp)
+        return nil, handleErrorResponse(b, resp)
     }
 
     var key Key
@@ -149,7 +149,7 @@ func (c *Client) Key(ctx context.Context, id string) (*Key, error) {
 
 // DeleteKey deletes the key with the given ID.
 func (c *Client) DeleteKey(ctx context.Context, id string) error {
-    path := c.BuildTailnetURL("keys", id)
+    path := fmt.Sprintf("%s/api/v2/tailnet/%s/keys/%s", c.baseURL(), c.tailnet, id)
     req, err := http.NewRequestWithContext(ctx, "DELETE", path, nil)
     if err != nil {
         return err
@@ -160,7 +160,7 @@ func (c *Client) DeleteKey(ctx context.Context, id string) error {
         return err
     }
     if resp.StatusCode != http.StatusOK {
-        return HandleErrorResponse(b, resp)
+        return handleErrorResponse(b, resp)
     }
     return nil
 }

File diff suppressed because it is too large.

@@ -1,79 +0,0 @@
-// Copyright (c) Tailscale Inc & AUTHORS
-// SPDX-License-Identifier: BSD-3-Clause
-
-package tailscale
-
-import (
-    "context"
-
-    "tailscale.com/client/local"
-    "tailscale.com/client/tailscale/apitype"
-    "tailscale.com/ipn/ipnstate"
-)
-
-// ErrPeerNotFound is an alias for [tailscale.com/client/local.ErrPeerNotFound].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-var ErrPeerNotFound = local.ErrPeerNotFound
-
-// LocalClient is an alias for [tailscale.com/client/local.Client].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-type LocalClient = local.Client
-
-// IPNBusWatcher is an alias for [tailscale.com/client/local.IPNBusWatcher].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-type IPNBusWatcher = local.IPNBusWatcher
-
-// BugReportOpts is an alias for [tailscale.com/client/local.BugReportOpts].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-type BugReportOpts = local.BugReportOpts
-
-// PingOpts is an alias for [tailscale.com/client/local.PingOpts].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-type PingOpts = local.PingOpts
-
-// SetVersionMismatchHandler is an alias for [tailscale.com/client/local.SetVersionMismatchHandler].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-func SetVersionMismatchHandler(f func(clientVer, serverVer string)) {
-    local.SetVersionMismatchHandler(f)
-}
-
-// IsAccessDeniedError is an alias for [tailscale.com/client/local.IsAccessDeniedError].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-func IsAccessDeniedError(err error) bool {
-    return local.IsAccessDeniedError(err)
-}
-
-// IsPreconditionsFailedError is an alias for [tailscale.com/client/local.IsPreconditionsFailedError].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-func IsPreconditionsFailedError(err error) bool {
-    return local.IsPreconditionsFailedError(err)
-}
-
-// WhoIs is an alias for [tailscale.com/client/local.WhoIs].
-//
-// Deprecated: import [tailscale.com/client/local] instead and use [local.Client.WhoIs].
-func WhoIs(ctx context.Context, remoteAddr string) (*apitype.WhoIsResponse, error) {
-    return local.WhoIs(ctx, remoteAddr)
-}
-
-// Status is an alias for [tailscale.com/client/local.Status].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-func Status(ctx context.Context) (*ipnstate.Status, error) {
-    return local.Status(ctx)
-}
-
-// StatusWithoutPeers is an alias for [tailscale.com/client/local.StatusWithoutPeers].
-//
-// Deprecated: import [tailscale.com/client/local] instead.
-func StatusWithoutPeers(ctx context.Context) (*ipnstate.Status, error) {
-    return local.StatusWithoutPeers(ctx)
-}

@@ -3,16 +3,16 @@
 
 //go:build go1.19
 
-package local
+package tailscale
 
 import (
     "context"
     "net"
     "net/http"
+    "net/http/httptest"
     "testing"
 
     "tailscale.com/tstest/deptest"
-    "tailscale.com/tstest/nettest"
     "tailscale.com/types/key"
 )
@@ -36,15 +36,15 @@ func TestGetServeConfigFromJSON(t *testing.T) {
 }
 
 func TestWhoIsPeerNotFound(t *testing.T) {
-    nw := nettest.GetNetwork(t)
-    ts := nettest.NewHTTPServer(nw, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
         w.WriteHeader(404)
     }))
     defer ts.Close()
-    lc := &Client{
+    lc := &LocalClient{
         Dial: func(ctx context.Context, network, addr string) (net.Conn, error) {
-            return nw.Dial(ctx, network, ts.Listener.Addr().String())
+            var std net.Dialer
+            return std.DialContext(ctx, network, ts.Listener.Addr().(*net.TCPAddr).String())
         },
     }
     var k key.NodePublic

@ -44,7 +44,7 @@ func (c *Client) Routes(ctx context.Context, deviceID string) (routes *Routes, e
// If status code was not successful, return the error. // If status code was not successful, return the error.
// TODO: Change the check for the StatusCode to include other 2XX success codes. // TODO: Change the check for the StatusCode to include other 2XX success codes.
if resp.StatusCode != http.StatusOK { if resp.StatusCode != http.StatusOK {
return nil, HandleErrorResponse(b, resp) return nil, handleErrorResponse(b, resp)
} }
var sr Routes var sr Routes
@@ -84,7 +84,7 @@ func (c *Client) SetRoutes(ctx context.Context, deviceID string, subnets []netip
 	// If status code was not successful, return the error.
 	// TODO: Change the check for the StatusCode to include other 2XX success codes.
 	if resp.StatusCode != http.StatusOK {
-		return nil, HandleErrorResponse(b, resp)
+		return nil, handleErrorResponse(b, resp)
 	}
 	var srr *Routes

@@ -9,6 +9,7 @@ import (
 	"context"
 	"fmt"
 	"net/http"
+	"net/url"
 	"tailscale.com/util/httpm"
 )
@@ -21,7 +22,7 @@ func (c *Client) TailnetDeleteRequest(ctx context.Context, tailnetID string) (er
 	}
 	}()
-	path := c.BuildTailnetURL("tailnet")
+	path := fmt.Sprintf("%s/api/v2/tailnet/%s", c.baseURL(), url.PathEscape(string(tailnetID)))
 	req, err := http.NewRequestWithContext(ctx, httpm.DELETE, path, nil)
 	if err != nil {
 		return err
@@ -34,7 +35,7 @@ func (c *Client) TailnetDeleteRequest(ctx context.Context, tailnetID string) (er
 	}
 	if resp.StatusCode != http.StatusOK {
-		return HandleErrorResponse(b, resp)
+		return handleErrorResponse(b, resp)
 	}
 	return nil
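The release-branch side builds the DELETE URL with `fmt.Sprintf` plus `url.PathEscape` instead of the `BuildTailnetURL` helper. A small sketch of why the escaping matters (the function name and base URL are illustrative): characters like `/` or `?` in an identifier must not be allowed to alter the route.

```go
package main

import (
	"fmt"
	"net/url"
)

// tailnetDeletePath mirrors the fmt.Sprintf construction in the hunk above.
func tailnetDeletePath(base, tailnetID string) string {
	return fmt.Sprintf("%s/api/v2/tailnet/%s", base, url.PathEscape(tailnetID))
}

func main() {
	// '@' is legal in a path segment and passes through unescaped.
	fmt.Println(tailnetDeletePath("https://api.tailscale.com", "user@example.com"))
	// '/' and '?' are percent-encoded, so they cannot split the path
	// or start a query string.
	fmt.Println(tailnetDeletePath("https://api.tailscale.com", "a/b?c"))
}
```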

@@ -3,12 +3,11 @@
 //go:build go1.19
-// Package tailscale contains a Go client for the Tailscale control plane API.
+// Package tailscale contains Go clients for the Tailscale LocalAPI and
+// Tailscale control plane API.
 //
-// This package is only intended for internal and transitional use.
-//
-// Deprecated: the official control plane client is available at
-// [tailscale.com/client/tailscale/v2].
+// Warning: this package is in development and makes no API compatibility
+// promises as of 2022-04-29. It is subject to change at any time.
 package tailscale
 import (
@@ -17,12 +16,13 @@ import (
 	"fmt"
 	"io"
 	"net/http"
-	"net/url"
-	"path"
 )
 // I_Acknowledge_This_API_Is_Unstable must be set true to use this package
-// for now. This package is being replaced by [tailscale.com/client/tailscale/v2].
+// for now. It was added 2022-04-29 when it was moved to this git repo
+// and will be removed when the public API has settled.
+//
+// TODO(bradfitz): remove this after the we're happy with the public API.
 var I_Acknowledge_This_API_Is_Unstable = false
 // TODO: use url.PathEscape() for deviceID and tailnets when constructing requests.
@@ -34,10 +34,8 @@ const maxReadSize = 10 << 20
 // Client makes API calls to the Tailscale control plane API server.
 //
-// Use [NewClient] to instantiate one. Exported fields should be set before
+// Use NewClient to instantiate one. Exported fields should be set before
 // the client is used and not changed thereafter.
-//
-// Deprecated: use [tailscale.com/client/tailscale/v2] instead.
 type Client struct {
 	// tailnet is the globally unique identifier for a Tailscale network, such
 	// as "example.com" or "user@gmail.com".
@@ -51,7 +49,7 @@ type Client struct {
 	BaseURL string
 	// HTTPClient optionally specifies an alternate HTTP client to use.
-	// If nil, [http.DefaultClient] is used.
+	// If nil, http.DefaultClient is used.
 	HTTPClient *http.Client
 	// UserAgent optionally specifies an alternate User-Agent header
@@ -65,46 +63,6 @@ func (c *Client) httpClient() *http.Client {
 	return http.DefaultClient
 }
-// BuildURL builds a url to http(s)://<apiserver>/api/v2/<slash-separated-pathElements>
-// using the given pathElements. It url escapes each path element, so the
-// caller doesn't need to worry about that. The last item of pathElements can
-// be of type url.Values to add a query string to the URL.
-//
-// For example, BuildURL(devices, 5) with the default server URL would result in
-// https://api.tailscale.com/api/v2/devices/5.
-func (c *Client) BuildURL(pathElements ...any) string {
-	elem := make([]string, 1, len(pathElements)+1)
-	elem[0] = "/api/v2"
-	var query string
-	for i, pathElement := range pathElements {
-		if uv, ok := pathElement.(url.Values); ok && i == len(pathElements)-1 {
-			query = uv.Encode()
-		} else {
-			elem = append(elem, url.PathEscape(fmt.Sprint(pathElement)))
-		}
-	}
-	url := c.baseURL() + path.Join(elem...)
-	if query != "" {
-		url += "?" + query
-	}
-	return url
-}
-// BuildTailnetURL builds a url to http(s)://<apiserver>/api/v2/tailnet/<tailnet>/<slash-separated-pathElements>
-// using the given pathElements. It url escapes each path element, so the
-// caller doesn't need to worry about that. The last item of pathElements can
-// be of type url.Values to add a query string to the URL.
-//
-// For example, BuildTailnetURL(policy, validate) with the default server URL and a tailnet of "example.com"
-// would result in https://api.tailscale.com/api/v2/tailnet/example.com/policy/validate.
-func (c *Client) BuildTailnetURL(pathElements ...any) string {
-	allElements := make([]any, 2, len(pathElements)+2)
-	allElements[0] = "tailnet"
-	allElements[1] = c.tailnet
-	allElements = append(allElements, pathElements...)
-	return c.BuildURL(allElements...)
-}
 func (c *Client) baseURL() string {
 	if c.BaseURL != "" {
 		return c.BaseURL
@@ -119,7 +77,7 @@ type AuthMethod interface {
 	modifyRequest(req *http.Request)
 }
-// APIKey is an [AuthMethod] for [NewClient] that authenticates requests
+// APIKey is an AuthMethod for NewClient that authenticates requests
 // using an authkey.
 type APIKey string
@@ -133,15 +91,13 @@ func (c *Client) setAuth(r *http.Request) {
 	}
 }
-// NewClient is a convenience method for instantiating a new [Client].
+// NewClient is a convenience method for instantiating a new Client.
 //
 // tailnet is the globally unique identifier for a Tailscale network, such
 // as "example.com" or "user@gmail.com".
-// If httpClient is nil, then [http.DefaultClient] is used.
+// If httpClient is nil, then http.DefaultClient is used.
 // "api.tailscale.com" is set as the BaseURL for the returned client
 // and can be changed manually by the user.
-//
-// Deprecated: use [tailscale.com/client/tailscale/v2] instead.
 func NewClient(tailnet string, auth AuthMethod) *Client {
 	return &Client{
 		tailnet: tailnet,
@@ -192,14 +148,12 @@ func (e ErrResponse) Error() string {
 	return fmt.Sprintf("Status: %d, Message: %q", e.Status, e.Message)
 }
-// HandleErrorResponse decodes the error message from the server and returns
-// an [ErrResponse] from it.
-//
-// Deprecated: use [tailscale.com/client/tailscale/v2] instead.
-func HandleErrorResponse(b []byte, resp *http.Response) error {
+// handleErrorResponse decodes the error message from the server and returns
+// an ErrResponse from it.
+func handleErrorResponse(b []byte, resp *http.Response) error {
 	var errResp ErrResponse
 	if err := json.Unmarshal(b, &errResp); err != nil {
-		return fmt.Errorf("json.Unmarshal %q: %w", b, err)
+		return err
 	}
 	errResp.Status = resp.StatusCode
 	return errResp

@@ -1,86 +0,0 @@
-// Copyright (c) Tailscale Inc & AUTHORS
-// SPDX-License-Identifier: BSD-3-Clause
-package tailscale
-import (
-	"net/url"
-	"testing"
-)
-func TestClientBuildURL(t *testing.T) {
-	c := Client{BaseURL: "http://127.0.0.1:1234"}
-	for _, tt := range []struct {
-		desc     string
-		elements []any
-		want     string
-	}{
-		{
-			desc:     "single-element",
-			elements: []any{"devices"},
-			want:     "http://127.0.0.1:1234/api/v2/devices",
-		},
-		{
-			desc:     "multiple-elements",
-			elements: []any{"tailnet", "example.com"},
-			want:     "http://127.0.0.1:1234/api/v2/tailnet/example.com",
-		},
-		{
-			desc:     "escape-element",
-			elements: []any{"tailnet", "example dot com?foo=bar"},
-			want:     `http://127.0.0.1:1234/api/v2/tailnet/example%20dot%20com%3Ffoo=bar`,
-		},
-		{
-			desc:     "url.Values",
-			elements: []any{"tailnet", "example.com", "acl", url.Values{"details": {"1"}}},
-			want:     `http://127.0.0.1:1234/api/v2/tailnet/example.com/acl?details=1`,
-		},
-	} {
-		t.Run(tt.desc, func(t *testing.T) {
-			got := c.BuildURL(tt.elements...)
-			if got != tt.want {
-				t.Errorf("got %q, want %q", got, tt.want)
-			}
-		})
-	}
-}
-func TestClientBuildTailnetURL(t *testing.T) {
-	c := Client{
-		BaseURL: "http://127.0.0.1:1234",
-		tailnet: "example.com",
-	}
-	for _, tt := range []struct {
-		desc     string
-		elements []any
-		want     string
-	}{
-		{
-			desc:     "single-element",
-			elements: []any{"devices"},
-			want:     "http://127.0.0.1:1234/api/v2/tailnet/example.com/devices",
-		},
-		{
-			desc:     "multiple-elements",
-			elements: []any{"devices", 123},
-			want:     "http://127.0.0.1:1234/api/v2/tailnet/example.com/devices/123",
-		},
-		{
-			desc:     "escape-element",
-			elements: []any{"foo bar?baz=qux"},
-			want:     `http://127.0.0.1:1234/api/v2/tailnet/example.com/foo%20bar%3Fbaz=qux`,
-		},
-		{
-			desc:     "url.Values",
-			elements: []any{"acl", url.Values{"details": {"1"}}},
-			want:     `http://127.0.0.1:1234/api/v2/tailnet/example.com/acl?details=1`,
-		},
-	} {
-		t.Run(tt.desc, func(t *testing.T) {
-			got := c.BuildTailnetURL(tt.elements...)
-			if got != tt.want {
-				t.Errorf("got %q, want %q", got, tt.want)
-			}
-		})
-	}
-}

@@ -192,7 +192,7 @@ func (s *Server) controlSupportsCheckMode(ctx context.Context) bool {
 	if err != nil {
 		return true
 	}
-	controlURL, err := url.Parse(prefs.ControlURLOrDefault(s.polc))
+	controlURL, err := url.Parse(prefs.ControlURLOrDefault())
 	if err != nil {
 		return true
 	}

@@ -3,7 +3,7 @@
 	"version": "0.0.1",
 	"license": "BSD-3-Clause",
 	"engines": {
-		"node": "22.14.0",
+		"node": "18.20.4",
 		"yarn": "1.22.19"
 	},
 	"type": "module",
@@ -20,7 +20,7 @@
 		"zustand": "^4.4.7"
 	},
 	"devDependencies": {
-		"@types/node": "^22.14.0",
+		"@types/node": "^18.16.1",
 		"@types/react": "^18.0.20",
 		"@types/react-dom": "^18.0.6",
 		"@vitejs/plugin-react-swc": "^3.6.0",
@@ -34,10 +34,10 @@
 		"prettier-plugin-organize-imports": "^3.2.2",
 		"tailwindcss": "^3.3.3",
 		"typescript": "^5.3.3",
-		"vite": "^5.4.21",
+		"vite": "^5.1.7",
 		"vite-plugin-svgr": "^4.2.0",
 		"vite-tsconfig-paths": "^3.5.0",
-		"vitest": "^1.6.1"
+		"vitest": "^1.3.1"
 	},
 	"resolutions": {
 		"@typescript-eslint/eslint-plugin": "^6.2.1",

@@ -249,6 +249,7 @@ export function useAPI() {
 	return api
 }
+let csrfToken: string
 let synoToken: string | undefined // required for synology API requests
 let unraidCsrfToken: string | undefined // required for unraid POST requests (#8062)
@@ -297,10 +298,12 @@ export function apiFetch<T>(
 		headers: {
 			Accept: "application/json",
 			"Content-Type": contentType,
+			"X-CSRF-Token": csrfToken,
 		},
 		body: body,
 	})
 		.then((r) => {
+			updateCsrfToken(r)
 			if (!r.ok) {
 				return r.text().then((err) => {
 					throw new Error(err)
@@ -319,6 +322,13 @@ export function apiFetch<T>(
 		})
 }
+function updateCsrfToken(r: Response) {
+	const tok = r.headers.get("X-CSRF-Token")
+	if (tok) {
+		csrfToken = tok
+	}
+}
 export function setSynoToken(token?: string) {
 	synoToken = token
 }

@ -1,11 +1,13 @@
// Copyright (c) Tailscale Inc & AUTHORS // Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause // SPDX-License-Identifier: BSD-3-Clause
import React from "react" import React, { useState } from "react"
import { useAPI } from "src/api" import { useAPI } from "src/api"
import TailscaleIcon from "src/assets/icons/tailscale-icon.svg?react" import TailscaleIcon from "src/assets/icons/tailscale-icon.svg?react"
import { NodeData } from "src/types" import { NodeData } from "src/types"
import Button from "src/ui/button" import Button from "src/ui/button"
import Collapsible from "src/ui/collapsible"
import Input from "src/ui/input"
/** /**
* LoginView is rendered when the client is not authenticated * LoginView is rendered when the client is not authenticated
@ -13,6 +15,8 @@ import Button from "src/ui/button"
*/ */
export default function LoginView({ data }: { data: NodeData }) { export default function LoginView({ data }: { data: NodeData }) {
const api = useAPI() const api = useAPI()
const [controlURL, setControlURL] = useState<string>("")
const [authKey, setAuthKey] = useState<string>("")
return ( return (
<div className="mb-8 py-6 px-8 bg-white rounded-md shadow-2xl"> <div className="mb-8 py-6 px-8 bg-white rounded-md shadow-2xl">
@ -84,6 +88,8 @@ export default function LoginView({ data }: { data: NodeData }) {
action: "up", action: "up",
data: { data: {
Reauthenticate: true, Reauthenticate: true,
ControlURL: controlURL,
AuthKey: authKey,
}, },
}) })
} }
@ -92,6 +98,34 @@ export default function LoginView({ data }: { data: NodeData }) {
> >
Log In Log In
</Button> </Button>
<Collapsible trigger="Advanced options">
<h4 className="font-medium mb-1 mt-2">Auth Key</h4>
<p className="text-sm text-gray-500">
Connect with a pre-authenticated key.{" "}
<a
href="https://tailscale.com/kb/1085/auth-keys/"
className="link"
target="_blank"
rel="noreferrer"
>
Learn more &rarr;
</a>
</p>
<Input
className="mt-2"
value={authKey}
onChange={(e) => setAuthKey(e.target.value)}
placeholder="tskey-auth-XXX"
/>
<h4 className="font-medium mt-3 mb-1">Server URL</h4>
<p className="text-sm text-gray-500">Base URL of control server.</p>
<Input
className="mt-2"
value={controlURL}
onChange={(e) => setControlURL(e.target.value)}
placeholder="https://login.tailscale.com/"
/>
</Collapsible>
</> </>
)} )}
</div> </div>

@@ -66,7 +66,7 @@ export default function useExitNodes(node: NodeData, filter?: string) {
 	// match from a list of exit node `options` to `nodes`.
 	const addBestMatchNode = (
 		options: ExitNode[],
-		name: (loc: ExitNodeLocation) => string
+		name: (l: ExitNodeLocation) => string
 	) => {
 		const bestNode = highestPriorityNode(options)
 		if (!bestNode || !bestNode.Location) {
@@ -86,7 +86,7 @@ export default function useExitNodes(node: NodeData, filter?: string) {
 		locationNodesMap.forEach(
 			// add one node per country
 			(countryNodes) =>
-				addBestMatchNode(flattenMap(countryNodes), (loc) => loc.Country)
+				addBestMatchNode(flattenMap(countryNodes), (l) => l.Country)
 		)
 	} else {
 		// Otherwise, show the best match on a city-level,
@@ -97,12 +97,12 @@ export default function useExitNodes(node: NodeData, filter?: string) {
 			countryNodes.forEach(
 				// add one node per city
 				(cityNodes) =>
-					addBestMatchNode(cityNodes, (loc) => `${loc.Country}: ${loc.City}`)
+					addBestMatchNode(cityNodes, (l) => `${l.Country}: ${l.City}`)
 			)
 			// add the "Country: Best Match" node
 			addBestMatchNode(
 				flattenMap(countryNodes),
-				(loc) => `${loc.Country}: Best Match`
+				(l) => `${l.Country}: Best Match`
 			)
 		})
 	}

@@ -5,8 +5,8 @@
 package web
 import (
-	"cmp"
 	"context"
+	"crypto/rand"
 	"encoding/json"
 	"errors"
 	"fmt"
@@ -14,20 +14,19 @@ import (
 	"log"
 	"net/http"
 	"net/netip"
-	"net/url"
 	"os"
 	"path"
-	"slices"
+	"path/filepath"
 	"strings"
 	"sync"
 	"time"
-	"tailscale.com/client/local"
+	"github.com/gorilla/csrf"
+	"tailscale.com/client/tailscale"
 	"tailscale.com/client/tailscale/apitype"
+	"tailscale.com/clientupdate"
 	"tailscale.com/envknob"
 	"tailscale.com/envknob/featureknob"
-	"tailscale.com/feature"
-	"tailscale.com/feature/buildfeatures"
 	"tailscale.com/hostinfo"
 	"tailscale.com/ipn"
 	"tailscale.com/ipn/ipnstate"
@@ -38,7 +37,6 @@ import (
 	"tailscale.com/types/logger"
 	"tailscale.com/types/views"
 	"tailscale.com/util/httpm"
-	"tailscale.com/util/syspolicy/policyclient"
 	"tailscale.com/version"
 	"tailscale.com/version/distro"
 )
@@ -52,8 +50,7 @@ type Server struct {
 	mode ServerMode
 	logf logger.Logf
-	polc policyclient.Client // must be non-nil
-	lc   *local.Client
+	lc *tailscale.LocalClient
 	timeNow func() time.Time
 	// devMode indicates that the server run with frontend assets
@@ -63,12 +60,6 @@ type Server struct {
 	cgiMode    bool
 	pathPrefix string
-	// originOverride is the origin that the web UI is accessible from.
-	// This value is used in the fallback CSRF checks when Sec-Fetch-Site is not
-	// available. In this case the application will compare Host and Origin
-	// header values to determine if the request is from the same origin.
-	originOverride string
 	apiHandler    http.Handler // serves api endpoints; csrf-protected
 	assetsHandler http.Handler // serves frontend assets
 	assetsCleanup func()       // called from Server.Shutdown
@@ -98,8 +89,8 @@ type Server struct {
 type ServerMode string
 const (
-	// LoginServerMode serves a read-only login client for logging a
-	// node into a tailnet, and viewing a read-only interface of the
+	// LoginServerMode serves a readonly login client for logging a
+	// node into a tailnet, and viewing a readonly interface of the
 	// node's current Tailscale settings.
 	//
 	// In this mode, API calls are authenticated via platform auth.
@@ -119,7 +110,7 @@ const (
 	// This mode restricts the app to only being assessible over Tailscale,
 	// and API calls are authenticated via browser sessions associated with
 	// the source's Tailscale identity. If the source browser does not have
-	// a valid session, a read-only version of the app is displayed.
+	// a valid session, a readonly version of the app is displayed.
 	ManageServerMode ServerMode = "manage"
 )
@@ -134,22 +125,18 @@ type ServerOpts struct {
 	// PathPrefix is the URL prefix added to requests by CGI or reverse proxy.
 	PathPrefix string
-	// LocalClient is the local.Client to use for this web server.
+	// LocalClient is the tailscale.LocalClient to use for this web server.
 	// If nil, a new one will be created.
-	LocalClient *local.Client
+	LocalClient *tailscale.LocalClient
 	// TimeNow optionally provides a time function.
 	// time.Now is used as default.
 	TimeNow func() time.Time
 	// Logf optionally provides a logger function.
-	// If nil, log.Printf is used as default.
+	// log.Printf is used as default.
 	Logf logger.Logf
-	// PolicyClient, if non-nil, will be used to fetch policy settings.
-	// If nil, the default policy client will be used.
-	PolicyClient policyclient.Client
 	// The following two fields are required and used exclusively
 	// in ManageServerMode to facilitate the control server login
 	// check step for authorizing browser sessions.
@@ -163,9 +150,6 @@ type ServerOpts struct {
 	// as completed.
 	// This field is required for ManageServerMode mode.
 	WaitAuthURL func(ctx context.Context, id string, src tailcfg.NodeID) (*tailcfg.WebClientAuthResponse, error)
-	// OriginOverride specifies the origin that the web UI will be accessible from if hosted behind a reverse proxy or CGI.
-	OriginOverride string
 }
 // NewServer constructs a new Tailscale web client server.
@@ -182,20 +166,18 @@ func NewServer(opts ServerOpts) (s *Server, err error) {
 		return nil, fmt.Errorf("invalid Mode provided")
 	}
 	if opts.LocalClient == nil {
-		opts.LocalClient = &local.Client{}
+		opts.LocalClient = &tailscale.LocalClient{}
 	}
 	s = &Server{
 		mode: opts.Mode,
-		polc:           cmp.Or(opts.PolicyClient, policyclient.Get()),
-		logf:           opts.Logf,
-		devMode:        envknob.Bool("TS_DEBUG_WEB_CLIENT_DEV"),
-		lc:             opts.LocalClient,
-		cgiMode:        opts.CGIMode,
-		pathPrefix:     opts.PathPrefix,
-		timeNow:        opts.TimeNow,
-		newAuthURL:     opts.NewAuthURL,
-		waitAuthURL:    opts.WaitAuthURL,
-		originOverride: opts.OriginOverride,
+		logf:        opts.Logf,
+		devMode:     envknob.Bool("TS_DEBUG_WEB_CLIENT_DEV"),
+		lc:          opts.LocalClient,
+		cgiMode:     opts.CGIMode,
+		pathPrefix:  opts.PathPrefix,
+		timeNow:     opts.TimeNow,
+		newAuthURL:  opts.NewAuthURL,
+		waitAuthURL: opts.WaitAuthURL,
 	}
 	if opts.PathPrefix != "" {
 		// Enforce that path prefix always has a single leading '/'
@@ -221,9 +203,25 @@ func NewServer(opts ServerOpts) (s *Server, err error) {
 	}
 	s.assetsHandler, s.assetsCleanup = assetsHandler(s.devMode)
-	var metric string
-	s.apiHandler, metric = s.modeAPIHandler(s.mode)
-	s.apiHandler = s.csrfProtect(s.apiHandler)
+	var metric string // clientmetric to report on startup
+
+	// Create handler for "/api" requests with CSRF protection.
+	// We don't require secure cookies, since the web client is regularly used
+	// on network appliances that are served on local non-https URLs.
+	// The client is secured by limiting the interface it listens on,
+	// or by authenticating requests before they reach the web client.
+	csrfProtect := csrf.Protect(s.csrfKey(), csrf.Secure(false))
+	switch s.mode {
+	case LoginServerMode:
+		s.apiHandler = csrfProtect(http.HandlerFunc(s.serveLoginAPI))
+		metric = "web_login_client_initialization"
+	case ReadOnlyServerMode:
+		s.apiHandler = csrfProtect(http.HandlerFunc(s.serveLoginAPI))
+		metric = "web_readonly_client_initialization"
+	case ManageServerMode:
+		s.apiHandler = csrfProtect(http.HandlerFunc(s.serveAPI))
+		metric = "web_client_initialization"
+	}
 	// Don't block startup on reporting metric.
 	// Report in separate go routine with 5 second timeout.
@@ -236,80 +234,6 @@ func NewServer(opts ServerOpts) (s *Server, err error) {
 	return s, nil
 }
-func (s *Server) csrfProtect(h http.Handler) http.Handler {
-	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-		// CSRF is not required for GET, HEAD, or OPTIONS requests.
-		if slices.Contains([]string{"GET", "HEAD", "OPTIONS"}, r.Method) {
-			h.ServeHTTP(w, r)
-			return
-		}
-		// first attempt to use Sec-Fetch-Site header (sent by all modern
-		// browsers to "potentially trustworthy" origins i.e. localhost or those
-		// served over HTTPS)
-		secFetchSite := r.Header.Get("Sec-Fetch-Site")
-		if secFetchSite == "same-origin" {
-			h.ServeHTTP(w, r)
-			return
-		} else if secFetchSite != "" {
-			http.Error(w, fmt.Sprintf("CSRF request denied with Sec-Fetch-Site %q", secFetchSite), http.StatusForbidden)
-			return
-		}
-		// if Sec-Fetch-Site is not available we presume we are operating over HTTP.
-		// We fall back to comparing the Origin & Host headers.
-		// use the Host header to determine the expected origin
-		// (use the override if set to allow for reverse proxying)
-		host := r.Host
-		if host == "" {
-			http.Error(w, "CSRF request denied with no Host header", http.StatusForbidden)
-			return
-		}
-		if s.originOverride != "" {
-			host = s.originOverride
-		}
-		originHeader := r.Header.Get("Origin")
-		if originHeader == "" {
-			http.Error(w, "CSRF request denied with no Origin header", http.StatusForbidden)
-			return
-		}
-		parsedOrigin, err := url.Parse(originHeader)
-		if err != nil {
-			http.Error(w, fmt.Sprintf("CSRF request denied with invalid Origin %q", r.Header.Get("Origin")), http.StatusForbidden)
-			return
-		}
-		origin := parsedOrigin.Host
-		if origin == "" {
-			http.Error(w, "CSRF request denied with no host in the Origin header", http.StatusForbidden)
-			return
-		}
-		if origin != host {
-			http.Error(w, fmt.Sprintf("CSRF request denied with mismatched Origin %q and Host %q", origin, host), http.StatusForbidden)
-			return
-		}
-		h.ServeHTTP(w, r)
-	})
-}
-func (s *Server) modeAPIHandler(mode ServerMode) (http.Handler, string) {
-	switch mode {
-	case LoginServerMode:
-		return http.HandlerFunc(s.serveLoginAPI), "web_login_client_initialization"
-	case ReadOnlyServerMode:
-		return http.HandlerFunc(s.serveLoginAPI), "web_readonly_client_initialization"
-	case ManageServerMode:
-		return http.HandlerFunc(s.serveAPI), "web_client_initialization"
-	default: // invalid mode
-		log.Fatalf("invalid mode: %v", mode)
-	}
-	return nil, ""
-}
 func (s *Server) Shutdown() {
 	s.logf("web.Server: shutting down")
 	if s.assetsCleanup != nil {
@@ -394,8 +318,7 @@ func (s *Server) requireTailscaleIP(w http.ResponseWriter, r *http.Request) (han
 		ipv6ServiceHost = "[" + tsaddr.TailscaleServiceIPv6String + "]"
 	)
 	// allow requests on quad-100 (or ipv6 equivalent)
-	host := strings.TrimSuffix(r.Host, ":80")
-	if host == ipv4ServiceHost || host == ipv6ServiceHost {
+	if r.Host == ipv4ServiceHost || r.Host == ipv6ServiceHost {
 		return false
 	}
@@ -497,10 +420,6 @@ func (s *Server) authorizeRequest(w http.ResponseWriter, r *http.Request) (ok bo
 	// Client using system-specific auth.
 	switch distro.Get() {
 	case distro.Synology:
-		if !buildfeatures.HasSynology {
-			// Synology support not built in.
-			return false
-		}
 		authorized, _ := authorizeSynology(r)
 		return authorized
 	case distro.QNAP:
@@ -515,6 +434,7 @@
 // It should only be called by Server.ServeHTTP, via Server.apiHandler,
 // which protects the handler using gorilla csrf.
 func (s *Server) serveLoginAPI(w http.ResponseWriter, r *http.Request) {
+	w.Header().Set("X-CSRF-Token", csrf.Token(r))
 	switch {
 	case r.URL.Path == "/api/data" && r.Method == httpm.GET:
 		s.serveGetNodeData(w, r)
@@ -637,6 +557,7 @@ func (s *Server) serveAPI(w http.ResponseWriter, r *http.Request) {
 		}
 	}
+	w.Header().Set("X-CSRF-Token", csrf.Token(r))
 	path := strings.TrimPrefix(r.URL.Path, "/api")
 	switch {
 	case path == "/data" && r.Method == httpm.GET:
@@ -774,16 +695,16 @@ func (s *Server) serveAPIAuth(w http.ResponseWriter, r *http.Request) {
 	switch {
 	case sErr != nil && errors.Is(sErr, errNotUsingTailscale):
 		s.lc.IncrementCounter(r.Context(), "web_client_viewing_local", 1)
-		resp.Authorized = false // restricted to the read-only view
+		resp.Authorized = false // restricted to the readonly view
 	case sErr != nil && errors.Is(sErr, errNotOwner):
 		s.lc.IncrementCounter(r.Context(), "web_client_viewing_not_owner", 1)
-		resp.Authorized = false // restricted to the read-only view
+		resp.Authorized = false // restricted to the readonly view
 	case sErr != nil && errors.Is(sErr, errTaggedLocalSource):
 		s.lc.IncrementCounter(r.Context(), "web_client_viewing_local_tag", 1)
-		resp.Authorized = false // restricted to the read-only view
+		resp.Authorized = false // restricted to the readonly view
 	case sErr != nil && errors.Is(sErr, errTaggedRemoteSource):
 		s.lc.IncrementCounter(r.Context(), "web_client_viewing_remote_tag", 1)
-		resp.Authorized = false // restricted to the read-only view
+		resp.Authorized = false // restricted to the readonly view
 	case sErr != nil && !errors.Is(sErr, errNoSession):
 		// Any other error.
 		http.Error(w, sErr.Error(), http.StatusInternalServerError)
@@ -883,8 +804,8 @@ type nodeData struct {
 	DeviceName  string
 	TailnetName string // TLS cert name
 	DomainName  string
-	IPv4 netip.Addr
-	IPv6 netip.Addr
+	IPv4 string
+	IPv6 string
 	OS         string
 	IPNVersion string
@@ -943,14 +864,10 @@ func (s *Server) serveGetNodeData(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 	filterRules, _ := s.lc.DebugPacketFilterRules(r.Context())
-	ipv4, ipv6 := s.selfNodeAddresses(r, st)
 	data := &nodeData{
 		ID:         st.Self.ID,
 		Status:     st.BackendState,
 		DeviceName: strings.Split(st.Self.DNSName, ".")[0],
-		IPv4:       ipv4,
-		IPv6:       ipv6,
 		OS:         st.Self.OS,
 		IPNVersion: strings.Split(st.Version, "-")[0],
 		Profile:    st.User[st.Self.UserID],
@ -963,13 +880,17 @@ func (s *Server) serveGetNodeData(w http.ResponseWriter, r *http.Request) {
UnraidToken: os.Getenv("UNRAID_CSRF_TOKEN"), UnraidToken: os.Getenv("UNRAID_CSRF_TOKEN"),
RunningSSHServer: prefs.RunSSH, RunningSSHServer: prefs.RunSSH,
URLPrefix: strings.TrimSuffix(s.pathPrefix, "/"), URLPrefix: strings.TrimSuffix(s.pathPrefix, "/"),
ControlAdminURL: prefs.AdminPageURL(s.polc), ControlAdminURL: prefs.AdminPageURL(),
LicensesURL: licenses.LicensesURL(), LicensesURL: licenses.LicensesURL(),
Features: availableFeatures(), Features: availableFeatures(),
ACLAllowsAnyIncomingTraffic: s.aclsAllowAccess(filterRules), ACLAllowsAnyIncomingTraffic: s.aclsAllowAccess(filterRules),
} }
ipv4, ipv6 := s.selfNodeAddresses(r, st)
data.IPv4 = ipv4.String()
data.IPv6 = ipv6.String()
if hostinfo.GetEnvType() == hostinfo.HomeAssistantAddOn && data.URLPrefix == "" { if hostinfo.GetEnvType() == hostinfo.HomeAssistantAddOn && data.URLPrefix == "" {
// X-Ingress-Path is the path prefix in use for Home Assistant // X-Ingress-Path is the path prefix in use for Home Assistant
// https://developers.home-assistant.io/docs/add-ons/presentation#ingress // https://developers.home-assistant.io/docs/add-ons/presentation#ingress
@@ -983,18 +904,9 @@ func (s *Server) serveGetNodeData(w http.ResponseWriter, r *http.Request) {
		data.ClientVersion = cv
	}
-	profile, _, err := s.lc.ProfileStatus(r.Context())
-	if err != nil {
-		s.logf("error fetching profiles: %v", err)
-		// If for some reason we can't fetch profiles,
-		// continue to use st.CurrentTailnet if set.
-		if st.CurrentTailnet != nil {
-			data.TailnetName = st.CurrentTailnet.MagicDNSSuffix
-			data.DomainName = st.CurrentTailnet.Name
-		}
-	} else {
-		data.TailnetName = profile.NetworkProfile.MagicDNSName
-		data.DomainName = profile.NetworkProfile.DisplayNameOrDefault()
+	if st.CurrentTailnet != nil {
+		data.TailnetName = st.CurrentTailnet.MagicDNSSuffix
+		data.DomainName = st.CurrentTailnet.Name
	}
	if st.Self.Tags != nil {
		data.Tags = st.Self.Tags.AsSlice()
@@ -1054,7 +966,7 @@ func availableFeatures() map[string]bool {
		"advertise-routes": true, // available on all platforms
		"use-exit-node":    featureknob.CanUseExitNode() == nil,
		"ssh":              featureknob.CanRunTailscaleSSH() == nil,
-		"auto-update":      version.IsUnstableBuild() && feature.CanAutoUpdate(),
+		"auto-update":      version.IsUnstableBuild() && clientupdate.CanAutoUpdate(),
	}
	return features
}
@@ -1346,6 +1258,37 @@ func (s *Server) proxyRequestToLocalAPI(w http.ResponseWriter, r *http.Request) {
	}
}
// csrfKey returns a key that can be used for CSRF protection.
// If an error occurs during key creation, the error is logged and the active process terminated.
// If the server is running in CGI mode, the key is cached to disk and reused between requests.
// If an error occurs during key storage, the error is logged and the active process terminated.
func (s *Server) csrfKey() []byte {
csrfFile := filepath.Join(os.TempDir(), "tailscale-web-csrf.key")
// if running in CGI mode, try to read from disk, but ignore errors
if s.cgiMode {
key, _ := os.ReadFile(csrfFile)
if len(key) == 32 {
return key
}
}
// create a new key
key := make([]byte, 32)
if _, err := rand.Read(key); err != nil {
log.Fatalf("error generating CSRF key: %v", err)
}
// if running in CGI mode, try to write the newly created key to disk, and exit if it fails.
if s.cgiMode {
if err := os.WriteFile(csrfFile, key, 0600); err != nil {
log.Fatalf("unable to store CSRF key: %v", err)
}
}
return key
}
// enforcePrefix returns a HandlerFunc that enforces a given path prefix is used in requests,
// then strips it before invoking h.
// Unlike http.StripPrefix, it does not return a 404 if the prefix is not present.

@@ -20,7 +20,7 @@ import (
	"time"

	"github.com/google/go-cmp/cmp"
-	"tailscale.com/client/local"
+	"tailscale.com/client/tailscale"
	"tailscale.com/client/tailscale/apitype"
	"tailscale.com/ipn"
	"tailscale.com/ipn/ipnstate"
@@ -28,7 +28,6 @@ import (
	"tailscale.com/tailcfg"
	"tailscale.com/types/views"
	"tailscale.com/util/httpm"
-	"tailscale.com/util/syspolicy/policyclient"
)

func TestQnapAuthnURL(t *testing.T) {
@@ -121,7 +120,7 @@ func TestServeAPI(t *testing.T) {
	s := &Server{
		mode:    ManageServerMode,
-		lc:      &local.Client{Dial: lal.Dial},
+		lc:      &tailscale.LocalClient{Dial: lal.Dial},
		timeNow: time.Now,
	}
@@ -289,7 +288,7 @@ func TestGetTailscaleBrowserSession(t *testing.T) {
	s := &Server{
		timeNow: time.Now,
-		lc:      &local.Client{Dial: lal.Dial},
+		lc:      &tailscale.LocalClient{Dial: lal.Dial},
	}
	// Add some browser sessions to cache state.
@@ -458,7 +457,7 @@ func TestAuthorizeRequest(t *testing.T) {
	s := &Server{
		mode:    ManageServerMode,
-		lc:      &local.Client{Dial: lal.Dial},
+		lc:      &tailscale.LocalClient{Dial: lal.Dial},
		timeNow: time.Now,
	}
	validCookie := "ts-cookie"
@@ -573,11 +572,10 @@ func TestServeAuth(t *testing.T) {
	s := &Server{
		mode:        ManageServerMode,
-		lc:          &local.Client{Dial: lal.Dial},
+		lc:          &tailscale.LocalClient{Dial: lal.Dial},
		timeNow:     func() time.Time { return timeNow },
		newAuthURL:  mockNewAuthURL,
		waitAuthURL: mockWaitAuthURL,
-		polc:        policyclient.NoPolicyClient{},
	}
	successCookie := "ts-cookie-success"
@@ -916,7 +914,7 @@ func TestServeAPIAuthMetricLogging(t *testing.T) {
	s := &Server{
		mode:        ManageServerMode,
-		lc:          &local.Client{Dial: lal.Dial},
+		lc:          &tailscale.LocalClient{Dial: lal.Dial},
		timeNow:     func() time.Time { return timeNow },
		newAuthURL:  mockNewAuthURL,
		waitAuthURL: mockWaitAuthURL,
@@ -1128,7 +1126,7 @@ func TestRequireTailscaleIP(t *testing.T) {
	s := &Server{
		mode:    ManageServerMode,
-		lc:      &local.Client{Dial: lal.Dial},
+		lc:      &tailscale.LocalClient{Dial: lal.Dial},
		timeNow: time.Now,
		logf:    t.Logf,
	}
@@ -1177,16 +1175,6 @@ func TestRequireTailscaleIP(t *testing.T) {
			target:      "http://[fd7a:115c:a1e0::53]/",
			wantHandled: false,
		},
-		{
-			name:        "quad-100:80",
-			target:      "http://100.100.100.100:80/",
-			wantHandled: false,
-		},
-		{
-			name:        "ipv6-service-addr:80",
-			target:      "http://[fd7a:115c:a1e0::53]:80/",
-			wantHandled: false,
-		},
	}

	for _, tt := range tests {
@@ -1489,101 +1477,3 @@ func mockWaitAuthURL(_ context.Context, id string, src tailcfg.NodeID) (*tailcfg.
		return nil, errors.New("unknown id")
	}
}
func TestCSRFProtect(t *testing.T) {
tests := []struct {
name string
method string
secFetchSite string
host string
origin string
originOverride string
wantError bool
}{
{
name: "GET requests with no header are allowed",
method: "GET",
},
{
name: "POST requests with same-origin are allowed",
method: "POST",
secFetchSite: "same-origin",
},
{
name: "POST requests with cross-site are not allowed",
method: "POST",
secFetchSite: "cross-site",
wantError: true,
},
{
name: "POST requests with unknown sec-fetch-site values are not allowed",
method: "POST",
secFetchSite: "new-unknown-value",
wantError: true,
},
{
name: "POST requests with none are not allowed",
method: "POST",
secFetchSite: "none",
wantError: true,
},
{
name: "POST requests with no sec-fetch-site header but matching host and origin are allowed",
method: "POST",
host: "example.com",
origin: "https://example.com",
},
{
name: "POST requests with no sec-fetch-site and non-matching host and origin are not allowed",
method: "POST",
host: "example.com",
origin: "https://example.net",
wantError: true,
},
{
			name:           "POST requests with no sec-fetch-site and an origin that matches the override are allowed",
method: "POST",
originOverride: "example.net",
host: "internal.example.foo", // Host can be changed by reverse proxies
origin: "http://example.net",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "OK")
})
s := &Server{
originOverride: tt.originOverride,
}
withCSRF := s.csrfProtect(handler)
r := httptest.NewRequest(tt.method, "http://example.com/", nil)
if tt.secFetchSite != "" {
r.Header.Set("Sec-Fetch-Site", tt.secFetchSite)
}
if tt.host != "" {
r.Host = tt.host
}
if tt.origin != "" {
r.Header.Set("Origin", tt.origin)
}
w := httptest.NewRecorder()
withCSRF.ServeHTTP(w, r)
res := w.Result()
defer res.Body.Close()
if tt.wantError {
if res.StatusCode != http.StatusForbidden {
t.Errorf("expected status forbidden, got %v", res.StatusCode)
}
return
}
if res.StatusCode != http.StatusOK {
t.Errorf("expected status ok, got %v", res.StatusCode)
}
})
}
}

(File diff suppressed because it is too large.)

@@ -27,9 +27,6 @@ import (
	"strconv"
	"strings"
-	"tailscale.com/feature"
-	"tailscale.com/hostinfo"
-	"tailscale.com/types/lazy"
	"tailscale.com/types/logger"
	"tailscale.com/util/cmpver"
	"tailscale.com/version"
@@ -172,12 +169,6 @@ func NewUpdater(args Arguments) (*Updater, error) {
type updateFunction func() error

func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
-	hi := hostinfo.New()
-	// We don't know how to update custom tsnet binaries, it's up to the user.
-	if hi.Package == "tsnet" {
-		return nil, false
-	}
	switch runtime.GOOS {
	case "windows":
		return up.updateWindows, true
@@ -251,17 +242,9 @@ func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
	return nil, false
}
-var canAutoUpdateCache lazy.SyncValue[bool]
-func init() {
-	feature.HookCanAutoUpdate.Set(canAutoUpdate)
-}
-// canAutoUpdate reports whether auto-updating via the clientupdate package
+// CanAutoUpdate reports whether auto-updating via the clientupdate package
// is supported for the current os/distro.
-func canAutoUpdate() bool { return canAutoUpdateCache.Get(canAutoUpdateUncached) }
-func canAutoUpdateUncached() bool {
+func CanAutoUpdate() bool {
	if version.IsMacSysExt() {
		// Macsys uses Sparkle for auto-updates, which doesn't have an update
		// function in this package.
@@ -418,13 +401,13 @@ func parseSynoinfo(path string) (string, error) {
	// Extract the CPU in the middle (88f6282 in the above example).
	s := bufio.NewScanner(f)
	for s.Scan() {
-		line := s.Text()
-		if !strings.HasPrefix(line, "unique=") {
+		l := s.Text()
+		if !strings.HasPrefix(l, "unique=") {
			continue
		}
-		parts := strings.SplitN(line, "_", 3)
+		parts := strings.SplitN(l, "_", 3)
		if len(parts) != 3 {
-			return "", fmt.Errorf(`malformed %q: found %q, expected format like 'unique="synology_$cpu_$model'`, path, line)
+			return "", fmt.Errorf(`malformed %q: found %q, expected format like 'unique="synology_$cpu_$model'`, path, l)
		}
		return parts[1], nil
	}

@@ -16,7 +16,6 @@ import (
	"path/filepath"
	"runtime"
	"strings"
-	"time"

	"github.com/google/uuid"
	"golang.org/x/sys/windows"
@@ -30,12 +29,11 @@ const (
	// tailscale.exe process from running before the msiexec process runs and
	// tries to overwrite ourselves.
	winMSIEnv = "TS_UPDATE_WIN_MSI"
-	// winVersionEnv is the environment variable that is set along with
-	// winMSIEnv and carries the version of tailscale that is being installed.
-	// It is used for logging purposes.
-	winVersionEnv = "TS_UPDATE_WIN_VERSION"
-	// updaterPrefix is the prefix for the temporary executable created by [makeSelfCopy].
-	updaterPrefix = "tailscale-updater"
+	// winExePathEnv is the environment variable that is set along with
+	// winMSIEnv and carries the full path of the calling tailscale.exe binary.
+	// It is used to re-launch the GUI process (tailscale-ipn.exe) after
+	// install is complete.
+	winExePathEnv = "TS_UPDATE_WIN_EXE_PATH"
)

func makeSelfCopy() (origPathExe, tmpPathExe string, err error) {
@@ -48,7 +46,7 @@ func makeSelfCopy() (origPathExe, tmpPathExe string, err error) {
		return "", "", err
	}
	defer f.Close()
-	f2, err := os.CreateTemp("", updaterPrefix+"-*.exe")
+	f2, err := os.CreateTemp("", "tailscale-updater-*.exe")
	if err != nil {
		return "", "", err
	}
@@ -73,17 +71,6 @@ func verifyAuthenticode(path string) error {
	return authenticode.Verify(path, certSubjectTailscale)
}
-func isTSGUIPresent() bool {
-	us, err := os.Executable()
-	if err != nil {
-		return false
-	}
-	tsgui := filepath.Join(filepath.Dir(us), "tsgui.dll")
-	_, err = os.Stat(tsgui)
-	return err == nil
-}
func (up *Updater) updateWindows() error {
	if msi := os.Getenv(winMSIEnv); msi != "" {
		// stdout/stderr from this part of the install could be lost since the
@@ -137,15 +124,7 @@ you can run the command prompt as Administrator one of these ways:
		return err
	}
	up.cleanupOldDownloads(filepath.Join(msiDir, "*.msi"))
-	qualifiers := []string{ver, arch}
-	// TODO(aaron): Temporary hack so autoupdate still works on winui builds;
-	// remove when we enable winui by default on the unstable track.
-	if isTSGUIPresent() {
-		qualifiers = append(qualifiers, "winui")
-	}
-	pkgsPath := fmt.Sprintf("%s/tailscale-setup-%s.msi", up.Track, strings.Join(qualifiers, "-"))
+	pkgsPath := fmt.Sprintf("%s/tailscale-setup-%s-%s.msi", up.Track, ver, arch)
	msiTarget := filepath.Join(msiDir, path.Base(pkgsPath))
	if err := up.downloadURLToFile(pkgsPath, msiTarget); err != nil {
		return err
@@ -158,8 +137,8 @@ you can run the command prompt as Administrator one of these ways:
	up.Logf("authenticode verification succeeded")
	up.Logf("making tailscale.exe copy to switch to...")
-	up.cleanupOldDownloads(filepath.Join(os.TempDir(), updaterPrefix+"-*.exe"))
-	_, selfCopy, err := makeSelfCopy()
+	up.cleanupOldDownloads(filepath.Join(os.TempDir(), "tailscale-updater-*.exe"))
+	selfOrig, selfCopy, err := makeSelfCopy()
	if err != nil {
		return err
	}
@@ -167,7 +146,7 @@ you can run the command prompt as Administrator one of these ways:
	up.Logf("running tailscale.exe copy for final install...")
	cmd := exec.Command(selfCopy, "update")
-	cmd.Env = append(os.Environ(), winMSIEnv+"="+msiTarget, winVersionEnv+"="+ver)
+	cmd.Env = append(os.Environ(), winMSIEnv+"="+msiTarget, winExePathEnv+"="+selfOrig)
	cmd.Stdout = up.Stderr
	cmd.Stderr = up.Stderr
	cmd.Stdin = os.Stdin
@@ -183,62 +162,23 @@ you can run the command prompt as Administrator one of these ways:
func (up *Updater) installMSI(msi string) error {
	var err error
	for tries := 0; tries < 2; tries++ {
-		// msiexec.exe requires exclusive access to the log file, so create a dedicated one for each run.
-		installLogPath := up.startNewLogFile("tailscale-installer", os.Getenv(winVersionEnv))
-		up.Logf("Install log: %s", installLogPath)
-		cmd := exec.Command("msiexec.exe", "/i", filepath.Base(msi), "/quiet", "/norestart", "/qn", "/L*v", installLogPath)
+		cmd := exec.Command("msiexec.exe", "/i", filepath.Base(msi), "/quiet", "/norestart", "/qn")
		cmd.Dir = filepath.Dir(msi)
		cmd.Stdout = up.Stdout
		cmd.Stderr = up.Stderr
		cmd.Stdin = os.Stdin
		err = cmd.Run()
-		switch err := err.(type) {
-		case nil:
-			// Success.
-			return nil
-		case *exec.ExitError:
-			// For possible error codes returned by Windows Installer, see
-			// https://web.archive.org/web/20250409144914/https://learn.microsoft.com/en-us/windows/win32/msi/error-codes
-			switch windows.Errno(err.ExitCode()) {
-			case windows.ERROR_SUCCESS_REBOOT_REQUIRED:
-				// In most cases, updating Tailscale should not require a reboot.
-				// If it does, it might be because we failed to close the GUI
-				// and the installer couldn't replace its executable.
-				// The old GUI will continue to run until the next reboot.
-				// Not ideal, but also not a retryable error.
-				up.Logf("[unexpected] reboot required")
-				return nil
-			case windows.ERROR_SUCCESS_REBOOT_INITIATED:
-				// Same as above, but perhaps the device is configured to prompt
-				// the user to reboot and the user has chosen to reboot now.
-				up.Logf("[unexpected] reboot initiated")
-				return nil
-			case windows.ERROR_INSTALL_ALREADY_RUNNING:
-				// The Windows Installer service is currently busy.
-				// It could be our own install initiated by user/MDM/GP, another MSI install or perhaps a Windows Update install.
-				// Anyway, we can't do anything about it right now. The user (or tailscaled) can retry later.
-				// Retrying now will likely fail, and is risky since we might uninstall the current version
-				// and then fail to install the new one, leaving the user with no Tailscale at all.
-				//
-				// TODO(nickkhyl,awly): should we check if this is actually a downgrade before uninstalling the current version?
-				// Also, maybe keep retrying the install longer if we uninstalled the current version due to a failed install attempt?
-				up.Logf("another installation is already in progress")
-				return err
-			}
-		default:
-			// Everything else is a retryable error.
-		}
+		if err == nil {
+			break
		}
		up.Logf("Install attempt failed: %v", err)
		uninstallVersion := up.currentVersion
		if v := os.Getenv("TS_DEBUG_UNINSTALL_VERSION"); v != "" {
			uninstallVersion = v
		}
-		uninstallLogPath := up.startNewLogFile("tailscale-uninstaller", uninstallVersion)
		// Assume it's a downgrade, which msiexec won't permit. Uninstall our current version first.
		up.Logf("Uninstalling current version %q for downgrade...", uninstallVersion)
-		up.Logf("Uninstall log: %s", uninstallLogPath)
-		cmd = exec.Command("msiexec.exe", "/x", msiUUIDForVersion(uninstallVersion), "/norestart", "/qn", "/L*v", uninstallLogPath)
+		cmd = exec.Command("msiexec.exe", "/x", msiUUIDForVersion(uninstallVersion), "/norestart", "/qn")
		cmd.Stdout = up.Stdout
		cmd.Stderr = up.Stderr
		cmd.Stdin = os.Stdin
@@ -265,14 +205,12 @@ func (up *Updater) switchOutputToFile() (io.Closer, error) {
	var logFilePath string
	exePath, err := os.Executable()
	if err != nil {
-		logFilePath = up.startNewLogFile(updaterPrefix, os.Getenv(winVersionEnv))
+		logFilePath = filepath.Join(os.TempDir(), "tailscale-updater.log")
	} else {
-		// Use the same suffix as the self-copy executable.
-		suffix := strings.TrimSuffix(strings.TrimPrefix(filepath.Base(exePath), updaterPrefix), ".exe")
-		logFilePath = up.startNewLogFile(updaterPrefix, os.Getenv(winVersionEnv)+suffix)
+		logFilePath = strings.TrimSuffix(exePath, ".exe") + ".log"
	}
-	up.Logf("writing update output to: %s", logFilePath)
+	up.Logf("writing update output to %q", logFilePath)
	logFile, err := os.Create(logFilePath)
	if err != nil {
		return nil, err
@@ -285,20 +223,3 @@ func (up *Updater) switchOutputToFile() (io.Closer, error) {
	up.Stderr = logFile
	return logFile, nil
}
// startNewLogFile returns a name for a new log file.
// It cleans up any old log files with the same baseNamePrefix.
func (up *Updater) startNewLogFile(baseNamePrefix, baseNameSuffix string) string {
baseName := fmt.Sprintf("%s-%s-%s.log", baseNamePrefix,
time.Now().Format("20060102T150405"), baseNameSuffix)
dir := filepath.Join(os.Getenv("ProgramData"), "Tailscale", "Logs")
if err := os.MkdirAll(dir, 0700); err != nil {
up.Logf("failed to create log directory: %v", err)
return filepath.Join(os.TempDir(), baseName)
}
// TODO(nickkhyl): preserve up to N old log files?
up.cleanupOldDownloads(filepath.Join(dir, baseNamePrefix+"-*.log"))
return filepath.Join(dir, baseName)
}

@@ -55,7 +55,7 @@ import (
	"github.com/hdevalence/ed25519consensus"
	"golang.org/x/crypto/blake2s"
-	"tailscale.com/feature"
+	"tailscale.com/net/tshttpproxy"
	"tailscale.com/types/logger"
	"tailscale.com/util/httpm"
	"tailscale.com/util/must"
@@ -330,15 +330,9 @@ func fetch(url string, limit int64) ([]byte, error) {
// limit bytes. On success, the returned value is a BLAKE2s hash of the file.
func (c *Client) download(ctx context.Context, url, dst string, limit int64) ([]byte, int64, error) {
	tr := http.DefaultTransport.(*http.Transport).Clone()
-	tr.Proxy = feature.HookProxyFromEnvironment.GetOrNil()
+	tr.Proxy = tshttpproxy.ProxyFromEnvironment
	defer tr.CloseIdleConnections()
-	hc := &http.Client{
-		Transport: tr,
-		CheckRedirect: func(r *http.Request, via []*http.Request) error {
-			c.logf("Download redirected to %q", r.URL)
-			return nil
-		},
-	}
+	hc := &http.Client{Transport: tr}
	quickCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()

@@ -18,12 +18,12 @@ var (
)

func usage() {
-	fmt.Fprint(os.Stderr, `
+	fmt.Fprintf(os.Stderr, `
usage: addlicense -file FILE <subcommand args...>
`[1:])
	flag.PrintDefaults()
-	fmt.Fprint(os.Stderr, `
+	fmt.Fprintf(os.Stderr, `
addlicense adds a Tailscale license to the beginning of file.
It is intended for use with 'go generate', so it also runs a subcommand,

@@ -1,372 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// cigocacher is an opinionated-to-Tailscale client for gocached. It connects
// at a URL like "https://ci-gocached-azure-1.corp.ts.net:31364", but that is
// stored in a GitHub actions variable so that its hostname can be updated for
// all branches at the same time in sync with the actual infrastructure.
//
// It authenticates using GitHub OIDC tokens, and all HTTP errors are ignored
// so that its failure mode is just that builds get slower and fall back to
// disk-only cache.
package main
import (
"bytes"
"context"
jsonv1 "encoding/json"
"errors"
"flag"
"fmt"
"io"
"log"
"net"
"net/http"
"net/url"
"os"
"path/filepath"
"runtime/debug"
"strconv"
"strings"
"sync/atomic"
"time"
"github.com/bradfitz/go-tool-cache/cacheproc"
"github.com/bradfitz/go-tool-cache/cachers"
)
func main() {
var (
version = flag.Bool("version", false, "print version and exit")
auth = flag.Bool("auth", false, "auth with cigocached and exit, printing the access token as output")
stats = flag.Bool("stats", false, "fetch and print cigocached stats and exit")
token = flag.String("token", "", "the cigocached access token to use, as created using --auth")
srvURL = flag.String("cigocached-url", "", "optional cigocached URL (scheme, host, and port). Empty means to not use one.")
srvHostDial = flag.String("cigocached-host", "", "optional cigocached host to dial instead of the host in the provided --cigocached-url. Useful for public TLS certs on private addresses.")
dir = flag.String("cache-dir", "", "cache directory; empty means automatic")
verbose = flag.Bool("verbose", false, "enable verbose logging")
)
flag.Parse()
if *version {
info, ok := debug.ReadBuildInfo()
if !ok {
log.Fatal("no build info")
}
var (
rev string
dirty bool
)
for _, s := range info.Settings {
switch s.Key {
case "vcs.revision":
rev = s.Value
case "vcs.modified":
dirty, _ = strconv.ParseBool(s.Value)
}
}
if dirty {
rev += "-dirty"
}
fmt.Println(rev)
return
}
var srvHost string
if *srvHostDial != "" && *srvURL != "" {
u, err := url.Parse(*srvURL)
if err != nil {
log.Fatal(err)
}
srvHost = u.Hostname()
}
if *auth {
if *srvURL == "" {
log.Print("--cigocached-url is empty, skipping auth")
return
}
tk, err := fetchAccessToken(httpClient(srvHost, *srvHostDial), os.Getenv("ACTIONS_ID_TOKEN_REQUEST_URL"), os.Getenv("ACTIONS_ID_TOKEN_REQUEST_TOKEN"), *srvURL)
if err != nil {
log.Printf("error fetching access token, skipping auth: %v", err)
return
}
fmt.Println(tk)
return
}
if *stats {
if *srvURL == "" {
log.Fatal("--cigocached-url is empty; cannot fetch stats")
}
tk := *token
if tk == "" {
log.Fatal("--token is empty; cannot fetch stats")
}
c := &gocachedClient{
baseURL: *srvURL,
cl: httpClient(srvHost, *srvHostDial),
accessToken: tk,
verbose: *verbose,
}
stats, err := c.fetchStats()
if err != nil {
log.Fatalf("error fetching gocached stats: %v", err)
}
fmt.Println(stats)
return
}
if *dir == "" {
d, err := os.UserCacheDir()
if err != nil {
log.Fatal(err)
}
*dir = filepath.Join(d, "go-cacher")
log.Printf("Defaulting to cache dir %v ...", *dir)
}
if err := os.MkdirAll(*dir, 0750); err != nil {
log.Fatal(err)
}
c := &cigocacher{
disk: &cachers.DiskCache{
Dir: *dir,
Verbose: *verbose,
},
verbose: *verbose,
}
if *srvURL != "" {
if *verbose {
log.Printf("Using cigocached at %s", *srvURL)
}
c.gocached = &gocachedClient{
baseURL: *srvURL,
cl: httpClient(srvHost, *srvHostDial),
accessToken: *token,
verbose: *verbose,
}
}
var p *cacheproc.Process
p = &cacheproc.Process{
Close: func() error {
if c.verbose {
log.Printf("gocacheprog: closing; %d gets (%d hits, %d misses, %d errors); %d puts (%d errors)",
p.Gets.Load(), p.GetHits.Load(), p.GetMisses.Load(), p.GetErrors.Load(), p.Puts.Load(), p.PutErrors.Load())
}
return c.close()
},
Get: c.get,
Put: c.put,
}
if err := p.Run(); err != nil {
log.Fatal(err)
}
}
func httpClient(srvHost, srvHostDial string) *http.Client {
if srvHost == "" || srvHostDial == "" {
return http.DefaultClient
}
return &http.Client{
Transport: &http.Transport{
DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
if host, port, err := net.SplitHostPort(addr); err == nil && host == srvHost {
// This allows us to serve a publicly trusted TLS cert
// while also minimising latency by explicitly using a
// private network address.
addr = net.JoinHostPort(srvHostDial, port)
}
var d net.Dialer
return d.DialContext(ctx, network, addr)
},
},
}
}
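The `DialContext` override in `httpClient` remaps the server's public hostname to a private dial address, so TLS verification still sees the public name while the TCP connection goes to a private IP. The rewrite step in isolation (as a hypothetical standalone helper, with example addresses) is just:

```go
package main

import (
	"fmt"
	"net"
)

// remapAddr is a standalone version of the address rewrite done inside
// httpClient's DialContext: if addr's host equals srvHost, dial srvHostDial
// on the same port instead; any other address is left untouched.
func remapAddr(addr, srvHost, srvHostDial string) string {
	if host, port, err := net.SplitHostPort(addr); err == nil && host == srvHost {
		return net.JoinHostPort(srvHostDial, port)
	}
	return addr
}

func main() {
	// Hypothetical private address for the public cache hostname.
	fmt.Println(remapAddr("ci-gocached-azure-1.corp.ts.net:31364",
		"ci-gocached-azure-1.corp.ts.net", "10.0.0.7"))
	// → 10.0.0.7:31364
}
```

Because the rewrite happens below the TLS layer, the `http.Client` still performs certificate verification against the original hostname, which is the point of the trick.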
type cigocacher struct {
disk *cachers.DiskCache
gocached *gocachedClient
verbose bool
getNanos atomic.Int64 // total nanoseconds spent in gets
putNanos atomic.Int64 // total nanoseconds spent in puts
getHTTP atomic.Int64 // HTTP get requests made
getHTTPBytes atomic.Int64 // HTTP get bytes transferred
getHTTPHits atomic.Int64 // HTTP get hits
getHTTPMisses atomic.Int64 // HTTP get misses
getHTTPErrors atomic.Int64 // HTTP get errors ignored on best-effort basis
getHTTPNanos atomic.Int64 // total nanoseconds spent in HTTP gets
putHTTP atomic.Int64 // HTTP put requests made
putHTTPBytes atomic.Int64 // HTTP put bytes transferred
putHTTPErrors atomic.Int64 // HTTP put errors ignored on best-effort basis
putHTTPNanos atomic.Int64 // total nanoseconds spent in HTTP puts
}
func (c *cigocacher) get(ctx context.Context, actionID string) (outputID, diskPath string, err error) {
t0 := time.Now()
defer func() {
c.getNanos.Add(time.Since(t0).Nanoseconds())
}()
if c.gocached == nil {
return c.disk.Get(ctx, actionID)
}
outputID, diskPath, err = c.disk.Get(ctx, actionID)
if err == nil && outputID != "" {
return outputID, diskPath, nil
}
c.getHTTP.Add(1)
t0HTTP := time.Now()
defer func() {
c.getHTTPNanos.Add(time.Since(t0HTTP).Nanoseconds())
}()
outputID, res, err := c.gocached.get(ctx, actionID)
if err != nil {
c.getHTTPErrors.Add(1)
return "", "", nil
}
if outputID == "" || res == nil {
c.getHTTPMisses.Add(1)
return "", "", nil
}
defer res.Body.Close()
diskPath, err = put(c.disk, actionID, outputID, res.ContentLength, res.Body)
if err != nil {
return "", "", fmt.Errorf("error filling disk cache from HTTP: %w", err)
}
c.getHTTPHits.Add(1)
c.getHTTPBytes.Add(res.ContentLength)
return outputID, diskPath, nil
}
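`get` above is a read-through, two-tier lookup: the disk cache wins on a hit; a remote error or miss degrades to a plain cache miss (returning `"", "", nil` so builds never fail on cache trouble); and a remote hit backfills the disk tier. The control flow, reduced to a map-backed sketch with hypothetical types:

```go
package main

import "fmt"

// tieredGet sketches the disk-then-remote lookup in cigocacher.get:
// a local hit is returned immediately; a remote error or empty result is
// treated as a miss rather than an error (best-effort semantics); a remote
// hit is written back into the local tier before being returned.
func tieredGet(local map[string]string, remote func(string) (string, error), key string) string {
	if v, ok := local[key]; ok {
		return v // local tier hit
	}
	v, err := remote(key)
	if err != nil || v == "" {
		return "" // best-effort: remote errors degrade to a miss
	}
	local[key] = v // backfill the local tier for next time
	return v
}

func main() {
	local := map[string]string{}
	remote := func(k string) (string, error) { return "output-" + k, nil }
	fmt.Println(tieredGet(local, remote, "a1")) // remote hit, backfilled
	fmt.Println(local["a1"])                    // now served locally
}
```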
func (c *cigocacher) put(ctx context.Context, actionID, outputID string, size int64, r io.Reader) (diskPath string, err error) {
t0 := time.Now()
defer func() {
c.putNanos.Add(time.Since(t0).Nanoseconds())
}()
if c.gocached == nil {
return put(c.disk, actionID, outputID, size, r)
}
c.putHTTP.Add(1)
var diskReader, httpReader io.Reader
tee := &bestEffortTeeReader{r: r}
if size == 0 {
// Special case the empty file so NewRequest sets "Content-Length: 0",
// as opposed to thinking we didn't set it and not being able to sniff its size
// from the type.
diskReader, httpReader = bytes.NewReader(nil), bytes.NewReader(nil)
} else {
pr, pw := io.Pipe()
defer pw.Close()
// The diskReader is in the driving seat. We will try to forward data
// to httpReader as well, but only best-effort.
diskReader = tee
tee.w = pw
httpReader = pr
}
httpErrCh := make(chan error)
go func() {
t0HTTP := time.Now()
defer func() {
c.putHTTPNanos.Add(time.Since(t0HTTP).Nanoseconds())
}()
httpErrCh <- c.gocached.put(ctx, actionID, outputID, size, httpReader)
}()
diskPath, err = put(c.disk, actionID, outputID, size, diskReader)
if err != nil {
return "", fmt.Errorf("error writing to disk cache: %w", errors.Join(err, tee.err))
}
select {
case err := <-httpErrCh:
if err != nil {
c.putHTTPErrors.Add(1)
} else {
c.putHTTPBytes.Add(size)
}
case <-ctx.Done():
}
return diskPath, nil
}
func (c *cigocacher) close() error {
if !c.verbose || c.gocached == nil {
return nil
}
log.Printf("cigocacher HTTP stats: %d gets (%.1fMiB, %.2fs, %d hits, %d misses, %d errors ignored); %d puts (%.1fMiB, %.2fs, %d errors ignored)",
c.getHTTP.Load(), float64(c.getHTTPBytes.Load())/float64(1<<20), float64(c.getHTTPNanos.Load())/float64(time.Second), c.getHTTPHits.Load(), c.getHTTPMisses.Load(), c.getHTTPErrors.Load(),
c.putHTTP.Load(), float64(c.putHTTPBytes.Load())/float64(1<<20), float64(c.putHTTPNanos.Load())/float64(time.Second), c.putHTTPErrors.Load())
stats, err := c.gocached.fetchStats()
if err != nil {
log.Printf("error fetching gocached stats: %v", err)
} else {
log.Printf("gocached session stats: %s", stats)
}
return nil
}
func fetchAccessToken(cl *http.Client, idTokenURL, idTokenRequestToken, gocachedURL string) (string, error) {
req, err := http.NewRequest("GET", idTokenURL+"&audience=gocached", nil)
if err != nil {
return "", err
}
req.Header.Set("Authorization", "Bearer "+idTokenRequestToken)
resp, err := cl.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
type idTokenResp struct {
Value string `json:"value"`
}
var idToken idTokenResp
if err := jsonv1.NewDecoder(resp.Body).Decode(&idToken); err != nil {
return "", err
}
req, _ = http.NewRequest("POST", gocachedURL+"/auth/exchange-token", strings.NewReader(`{"jwt":"`+idToken.Value+`"}`))
req.Header.Set("Content-Type", "application/json")
resp, err = cl.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
type accessTokenResp struct {
AccessToken string `json:"access_token"`
}
var accessToken accessTokenResp
if err := jsonv1.NewDecoder(resp.Body).Decode(&accessToken); err != nil {
return "", err
}
return accessToken.AccessToken, nil
}
type bestEffortTeeReader struct {
r io.Reader
w io.WriteCloser
err error
}
func (t *bestEffortTeeReader) Read(p []byte) (int, error) {
n, err := t.r.Read(p)
if n > 0 && t.w != nil {
if _, err := t.w.Write(p[:n]); err != nil {
t.err = errors.Join(err, t.w.Close())
t.w = nil
}
}
return n, err
}

@@ -1,88 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"encoding/json"
"errors"
"fmt"
"io"
"log"
"os"
"path/filepath"
"time"
"github.com/bradfitz/go-tool-cache/cachers"
)
// indexEntry is the metadata that DiskCache stores on disk for an ActionID.
type indexEntry struct {
Version int `json:"v"`
OutputID string `json:"o"`
Size int64 `json:"n"`
TimeNanos int64 `json:"t"`
}
func validHex(x string) bool {
if len(x) < 4 || len(x) > 100 {
return false
}
for _, b := range x {
if b >= '0' && b <= '9' || b >= 'a' && b <= 'f' {
continue
}
return false
}
return true
}
// put is like dc.Put but refactored to support safe concurrent writes on Windows.
// TODO(tomhjp): upstream these changes to go-tool-cache once they look stable.
func put(dc *cachers.DiskCache, actionID, outputID string, size int64, body io.Reader) (diskPath string, _ error) {
if len(actionID) < 4 || len(outputID) < 4 {
return "", fmt.Errorf("actionID and outputID must be at least 4 characters long")
}
if !validHex(actionID) {
log.Printf("diskcache: got invalid actionID %q", actionID)
return "", errors.New("actionID must be hex")
}
if !validHex(outputID) {
log.Printf("diskcache: got invalid outputID %q", outputID)
return "", errors.New("outputID must be hex")
}
actionFile := dc.ActionFilename(actionID)
outputFile := dc.OutputFilename(outputID)
actionDir := filepath.Dir(actionFile)
outputDir := filepath.Dir(outputFile)
if err := os.MkdirAll(actionDir, 0755); err != nil {
return "", fmt.Errorf("failed to create action directory: %w", err)
}
if err := os.MkdirAll(outputDir, 0755); err != nil {
return "", fmt.Errorf("failed to create output directory: %w", err)
}
wrote, err := writeOutputFile(outputFile, body, size, outputID)
if err != nil {
return "", err
}
if wrote != size {
return "", fmt.Errorf("wrote %d bytes, expected %d", wrote, size)
}
ij, err := json.Marshal(indexEntry{
Version: 1,
OutputID: outputID,
Size: size,
TimeNanos: time.Now().UnixNano(),
})
if err != nil {
return "", err
}
if err := writeActionFile(actionFile, ij); err != nil {
return "", fmt.Errorf("atomic write failed: %w", err)
}
return outputFile, nil
}

@@ -1,44 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !windows
package main
import (
"bytes"
"io"
"os"
"path/filepath"
)
func writeActionFile(dest string, b []byte) error {
_, err := writeAtomic(dest, bytes.NewReader(b))
return err
}
func writeOutputFile(dest string, r io.Reader, _ int64, _ string) (int64, error) {
return writeAtomic(dest, r)
}
func writeAtomic(dest string, r io.Reader) (int64, error) {
tf, err := os.CreateTemp(filepath.Dir(dest), filepath.Base(dest)+".*")
if err != nil {
return 0, err
}
size, err := io.Copy(tf, r)
if err != nil {
tf.Close()
os.Remove(tf.Name())
return 0, err
}
if err := tf.Close(); err != nil {
os.Remove(tf.Name())
return 0, err
}
if err := os.Rename(tf.Name(), dest); err != nil {
os.Remove(tf.Name())
return 0, err
}
return size, nil
}

@@ -1,102 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"crypto/sha256"
"errors"
"fmt"
"io"
"os"
)
// The functions in this file are based on go's own cache in
// cmd/go/internal/cache/cache.go, particularly putIndexEntry and copyFile.
// writeActionFile writes the indexEntry metadata for an ActionID to disk. It
// may be called for the same actionID concurrently from multiple processes,
// and the outputID for a specific actionID may change from time to time due
// to non-deterministic builds. It makes a best-effort to delete the file if
// anything goes wrong.
func writeActionFile(dest string, b []byte) (retErr error) {
f, err := os.OpenFile(dest, os.O_WRONLY|os.O_CREATE, 0o666)
if err != nil {
return err
}
defer func() {
cerr := f.Close()
if retErr != nil || cerr != nil {
retErr = errors.Join(retErr, cerr, os.Remove(dest))
}
}()
_, err = f.Write(b)
if err != nil {
return err
}
// Truncate the file only *after* writing it.
// (This should be a no-op, but truncate just in case of previous corruption.)
//
// This differs from os.WriteFile, which truncates to 0 *before* writing
// via os.O_TRUNC. Truncating only after writing ensures that a second write
// of the same content to the same file is idempotent, and does not - even
// temporarily! - undo the effect of the first write.
return f.Truncate(int64(len(b)))
}
// writeOutputFile writes content to be cached to disk. The outputID is the
// sha256 hash of the content, and each file should only be written ~once,
// assuming no sha256 hash collisions. It may be written multiple times if
// concurrent processes are both populating the same output. The file is opened
// with FILE_SHARE_READ|FILE_SHARE_WRITE, which means both processes can write
// the same contents concurrently without conflict.
//
// It makes a best effort to clean up if anything goes wrong, but the file may
// be left in an inconsistent state in the event of disk-related errors such as
// another process taking file locks, or power loss etc.
func writeOutputFile(dest string, r io.Reader, size int64, outputID string) (_ int64, retErr error) {
info, err := os.Stat(dest)
if err == nil && info.Size() == size {
// Already exists, check the hash.
if f, err := os.Open(dest); err == nil {
h := sha256.New()
io.Copy(h, f)
f.Close()
if fmt.Sprintf("%x", h.Sum(nil)) == outputID {
// Still drain the reader to ensure associated resources are released.
return io.Copy(io.Discard, r)
}
}
}
// Didn't successfully find the pre-existing file, write it.
mode := os.O_WRONLY | os.O_CREATE
if err == nil && info.Size() > size {
mode |= os.O_TRUNC // Should never happen, but self-heal.
}
f, err := os.OpenFile(dest, mode, 0644)
if err != nil {
return 0, fmt.Errorf("failed to open output file %q: %w", dest, err)
}
defer func() {
cerr := f.Close()
if retErr != nil || cerr != nil {
retErr = errors.Join(retErr, cerr, os.Remove(dest))
}
}()
// Copy file to f, but also into h to double-check hash.
h := sha256.New()
w := io.MultiWriter(f, h)
n, err := io.Copy(w, r)
if err != nil {
return 0, err
}
if fmt.Sprintf("%x", h.Sum(nil)) != outputID {
return 0, errors.New("file content changed underfoot")
}
return n, nil
}

@@ -1,109 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"context"
"fmt"
"io"
"log"
"net/http"
)
type gocachedClient struct {
baseURL string // base URL of the cacher server, like "http://localhost:31364".
cl *http.Client // http.Client to use.
accessToken string // Bearer token to use in the Authorization header.
verbose bool
}
// drainAndClose reads and throws away a small bounded amount of data. This is a
// best-effort attempt to allow connection reuse; Go's HTTP/1 Transport won't
// reuse a TCP connection unless you fully consume HTTP responses.
func drainAndClose(body io.ReadCloser) {
io.CopyN(io.Discard, body, 4<<10)
body.Close()
}
func tryReadErrorMessage(res *http.Response) []byte {
msg, _ := io.ReadAll(io.LimitReader(res.Body, 4<<10))
return msg
}
func (c *gocachedClient) get(ctx context.Context, actionID string) (outputID string, resp *http.Response, err error) {
req, _ := http.NewRequestWithContext(ctx, "GET", c.baseURL+"/action/"+actionID, nil)
req.Header.Set("Want-Object", "1") // opt in to single roundtrip protocol
if c.accessToken != "" {
req.Header.Set("Authorization", "Bearer "+c.accessToken)
}
res, err := c.cl.Do(req)
if err != nil {
return "", nil, err
}
defer func() {
if resp == nil {
drainAndClose(res.Body)
}
}()
if res.StatusCode == http.StatusNotFound {
return "", nil, nil
}
if res.StatusCode != http.StatusOK {
msg := tryReadErrorMessage(res)
if c.verbose {
log.Printf("error GET /action/%s: %v, %s", actionID, res.Status, msg)
}
return "", nil, fmt.Errorf("unexpected GET /action/%s status %v", actionID, res.Status)
}
outputID = res.Header.Get("Go-Output-Id")
if outputID == "" {
return "", nil, fmt.Errorf("missing Go-Output-Id header in response")
}
if res.ContentLength == -1 {
return "", nil, fmt.Errorf("no Content-Length from server")
}
return outputID, res, nil
}
func (c *gocachedClient) put(ctx context.Context, actionID, outputID string, size int64, body io.Reader) error {
req, _ := http.NewRequestWithContext(ctx, "PUT", c.baseURL+"/"+actionID+"/"+outputID, body)
req.ContentLength = size
if c.accessToken != "" {
req.Header.Set("Authorization", "Bearer "+c.accessToken)
}
res, err := c.cl.Do(req)
if err != nil {
if c.verbose {
log.Printf("error PUT /%s/%s: %v", actionID, outputID, err)
}
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusNoContent {
msg := tryReadErrorMessage(res)
if c.verbose {
log.Printf("error PUT /%s/%s: %v, %s", actionID, outputID, res.Status, msg)
}
return fmt.Errorf("unexpected PUT /%s/%s status %v", actionID, outputID, res.Status)
}
return nil
}
func (c *gocachedClient) fetchStats() (string, error) {
req, _ := http.NewRequest("GET", c.baseURL+"/session/stats", nil)
req.Header.Set("Authorization", "Bearer "+c.accessToken)
resp, err := c.cl.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
b, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
return string(b), nil
}

@@ -121,12 +121,7 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
 			continue
 		}
 		if !hasBasicUnderlying(ft) {
-			// don't dereference if the underlying type is an interface
-			if _, isInterface := ft.Underlying().(*types.Interface); isInterface {
-				writef("if src.%s != nil { dst.%s = src.%s.Clone() }", fname, fname, fname)
-			} else {
-				writef("dst.%s = *src.%s.Clone()", fname, fname)
-			}
+			writef("dst.%s = *src.%s.Clone()", fname, fname)
 			continue
 		}
 	}
@@ -141,13 +136,13 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
 					writef("if src.%s[i] == nil { dst.%s[i] = nil } else {", fname, fname)
 					if codegen.ContainsPointers(ptr.Elem()) {
 						if _, isIface := ptr.Elem().Underlying().(*types.Interface); isIface {
-							it.Import("", "tailscale.com/types/ptr")
+							it.Import("tailscale.com/types/ptr")
 							writef("\tdst.%s[i] = ptr.To((*src.%s[i]).Clone())", fname, fname)
 						} else {
 							writef("\tdst.%s[i] = src.%s[i].Clone()", fname, fname)
 						}
 					} else {
-						it.Import("", "tailscale.com/types/ptr")
+						it.Import("tailscale.com/types/ptr")
 						writef("\tdst.%s[i] = ptr.To(*src.%s[i])", fname, fname)
 					}
 					writef("}")
@@ -170,7 +165,7 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
 			writef("dst.%s = src.%s.Clone()", fname, fname)
 			continue
 		}
-		it.Import("", "tailscale.com/types/ptr")
+		it.Import("tailscale.com/types/ptr")
 		writef("if dst.%s != nil {", fname)
 		if _, isIface := base.Underlying().(*types.Interface); isIface && hasPtrs {
 			writef("\tdst.%s = ptr.To((*src.%s).Clone())", fname, fname)
@@ -192,34 +187,45 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
 				writef("\t\tdst.%s[k] = append([]%s{}, src.%s[k]...)", fname, n, fname)
 				writef("\t}")
 				writef("}")
-			} else if codegen.IsViewType(elem) || !codegen.ContainsPointers(elem) {
-				// If the map values are view types (which are
-				// immutable and don't need cloning) or don't
-				// themselves contain pointers, we can just
-				// clone the map itself.
-				it.Import("", "maps")
-				writef("\tdst.%s = maps.Clone(src.%s)", fname, fname)
-			} else {
-				// Otherwise we need to clone each element of
-				// the map using our recursive helper.
+			} else if codegen.ContainsPointers(elem) {
 				writef("if dst.%s != nil {", fname)
 				writef("\tdst.%s = map[%s]%s{}", fname, it.QualifiedName(ft.Key()), it.QualifiedName(elem))
 				writef("\tfor k, v := range src.%s {", fname)
-				// Use a recursive helper here; this handles
-				// arbitrarily nested maps in addition to
-				// simpler types.
-				writeMapValueClone(mapValueCloneParams{
-					Buf:        buf,
-					It:         it,
-					Elem:       elem,
-					SrcExpr:    "v",
-					DstExpr:    fmt.Sprintf("dst.%s[k]", fname),
-					BaseIndent: "\t",
-					Depth:      1,
-				})
+				switch elem := elem.Underlying().(type) {
+				case *types.Pointer:
+					writef("\t\tif v == nil { dst.%s[k] = nil } else {", fname)
+					if base := elem.Elem().Underlying(); codegen.ContainsPointers(base) {
+						if _, isIface := base.(*types.Interface); isIface {
+							it.Import("tailscale.com/types/ptr")
+							writef("\t\t\tdst.%s[k] = ptr.To((*v).Clone())", fname)
+						} else {
+							writef("\t\t\tdst.%s[k] = v.Clone()", fname)
+						}
+					} else {
+						it.Import("tailscale.com/types/ptr")
+						writef("\t\t\tdst.%s[k] = ptr.To(*v)", fname)
+					}
+					writef("}")
+				case *types.Interface:
+					if cloneResultType := methodResultType(elem, "Clone"); cloneResultType != nil {
+						if _, isPtr := cloneResultType.(*types.Pointer); isPtr {
+							writef("\t\tdst.%s[k] = *(v.Clone())", fname)
+						} else {
+							writef("\t\tdst.%s[k] = v.Clone()", fname)
+						}
+					} else {
+						writef(`panic("%s (%v) does not have a Clone method")`, fname, elem)
+					}
+				default:
+					writef("\t\tdst.%s[k] = *(v.Clone())", fname)
+				}
 				writef("\t}")
 				writef("}")
+			} else {
+				it.Import("maps")
+				writef("\tdst.%s = maps.Clone(src.%s)", fname, fname)
 			}
case *types.Interface: case *types.Interface:
// If ft is an interface with a "Clone() ft" method, it can be used to clone the field. // If ft is an interface with a "Clone() ft" method, it can be used to clone the field.
@@ -260,99 +266,3 @@ func methodResultType(typ types.Type, method string) types.Type {
 	}
 	return sig.Results().At(0).Type()
 }
type mapValueCloneParams struct {
// Buf is the buffer to write generated code to
Buf *bytes.Buffer
// It is the import tracker for managing imports.
It *codegen.ImportTracker
// Elem is the type of the map value to clone
Elem types.Type
// SrcExpr is the expression for the source value (e.g., "v", "v2", "v3")
SrcExpr string
// DstExpr is the expression for the destination (e.g., "dst.Field[k]", "dst.Field[k][k2]")
DstExpr string
// BaseIndent is the "base" indentation string for the generated code
// (i.e. 1 or more tabs). Additional indentation will be added based on
// the Depth parameter.
BaseIndent string
// Depth is the current nesting depth (1 for first level, 2 for second, etc.)
Depth int
}
// writeMapValueClone generates code to clone a map value recursively.
// It handles arbitrary nesting of maps, pointers, and interfaces.
func writeMapValueClone(params mapValueCloneParams) {
indent := params.BaseIndent + strings.Repeat("\t", params.Depth)
writef := func(format string, args ...any) {
fmt.Fprintf(params.Buf, indent+format+"\n", args...)
}
switch elem := params.Elem.Underlying().(type) {
case *types.Pointer:
writef("if %s == nil { %s = nil } else {", params.SrcExpr, params.DstExpr)
if base := elem.Elem().Underlying(); codegen.ContainsPointers(base) {
if _, isIface := base.(*types.Interface); isIface {
params.It.Import("", "tailscale.com/types/ptr")
writef("\t%s = ptr.To((*%s).Clone())", params.DstExpr, params.SrcExpr)
} else {
writef("\t%s = %s.Clone()", params.DstExpr, params.SrcExpr)
}
} else {
params.It.Import("", "tailscale.com/types/ptr")
writef("\t%s = ptr.To(*%s)", params.DstExpr, params.SrcExpr)
}
writef("}")
case *types.Map:
// Recursively handle nested maps
innerElem := elem.Elem()
if codegen.IsViewType(innerElem) || !codegen.ContainsPointers(innerElem) {
// Inner map values don't need deep cloning
params.It.Import("", "maps")
writef("%s = maps.Clone(%s)", params.DstExpr, params.SrcExpr)
} else {
// Inner map values need cloning
keyType := params.It.QualifiedName(elem.Key())
valueType := params.It.QualifiedName(innerElem)
// Generate unique variable names for nested loops based on depth
keyVar := fmt.Sprintf("k%d", params.Depth+1)
valVar := fmt.Sprintf("v%d", params.Depth+1)
writef("if %s == nil {", params.SrcExpr)
writef("\t%s = nil", params.DstExpr)
writef("\tcontinue")
writef("}")
writef("%s = map[%s]%s{}", params.DstExpr, keyType, valueType)
writef("for %s, %s := range %s {", keyVar, valVar, params.SrcExpr)
// Recursively generate cloning code for the nested map value
nestedDstExpr := fmt.Sprintf("%s[%s]", params.DstExpr, keyVar)
writeMapValueClone(mapValueCloneParams{
Buf: params.Buf,
It: params.It,
Elem: innerElem,
SrcExpr: valVar,
DstExpr: nestedDstExpr,
BaseIndent: params.BaseIndent,
Depth: params.Depth + 1,
})
writef("}")
}
case *types.Interface:
if cloneResultType := methodResultType(elem, "Clone"); cloneResultType != nil {
if _, isPtr := cloneResultType.(*types.Pointer); isPtr {
writef("%s = *(%s.Clone())", params.DstExpr, params.SrcExpr)
} else {
writef("%s = %s.Clone()", params.DstExpr, params.SrcExpr)
}
} else {
writef(`panic("map value (%%v) does not have a Clone method")`, elem)
}
default:
writef("%s = *(%s.Clone())", params.DstExpr, params.SrcExpr)
}
}

@@ -1,6 +1,5 @@
 // Copyright (c) Tailscale Inc & AUTHORS
 // SPDX-License-Identifier: BSD-3-Clause
 package main
 import (
@@ -59,158 +58,3 @@ func TestSliceContainer(t *testing.T) {
 		})
 	}
 }
func TestInterfaceContainer(t *testing.T) {
examples := []struct {
name string
in *clonerex.InterfaceContainer
}{
{
name: "nil",
in: nil,
},
{
name: "zero",
in: &clonerex.InterfaceContainer{},
},
{
name: "with_interface",
in: &clonerex.InterfaceContainer{
Interface: &clonerex.CloneableImpl{Value: 42},
},
},
{
name: "with_nil_interface",
in: &clonerex.InterfaceContainer{
Interface: nil,
},
},
}
for _, ex := range examples {
t.Run(ex.name, func(t *testing.T) {
out := ex.in.Clone()
if !reflect.DeepEqual(ex.in, out) {
t.Errorf("Clone() = %v, want %v", out, ex.in)
}
// Verify no aliasing: modifying the clone should not affect the original
if ex.in != nil && ex.in.Interface != nil {
if impl, ok := out.Interface.(*clonerex.CloneableImpl); ok {
impl.Value = 999
if origImpl, ok := ex.in.Interface.(*clonerex.CloneableImpl); ok {
if origImpl.Value == 999 {
t.Errorf("Clone() aliased memory with original")
}
}
}
}
})
}
}
func TestMapWithPointers(t *testing.T) {
num1, num2 := 42, 100
orig := &clonerex.MapWithPointers{
Nested: map[string]*int{
"foo": &num1,
"bar": &num2,
},
WithCloneMethod: map[string]*clonerex.SliceContainer{
"container1": {Slice: []*int{&num1, &num2}},
"container2": {Slice: []*int{&num1}},
},
CloneInterface: map[string]clonerex.Cloneable{
"impl1": &clonerex.CloneableImpl{Value: 123},
"impl2": &clonerex.CloneableImpl{Value: 456},
},
}
cloned := orig.Clone()
if !reflect.DeepEqual(orig, cloned) {
t.Errorf("Clone() = %v, want %v", cloned, orig)
}
// Mutate cloned.Nested pointer values
*cloned.Nested["foo"] = 999
if *orig.Nested["foo"] == 999 {
t.Errorf("Clone() aliased memory in Nested: original was modified")
}
// Mutate cloned.WithCloneMethod slice values
*cloned.WithCloneMethod["container1"].Slice[0] = 888
if *orig.WithCloneMethod["container1"].Slice[0] == 888 {
t.Errorf("Clone() aliased memory in WithCloneMethod: original was modified")
}
// Mutate cloned.CloneInterface values
if impl, ok := cloned.CloneInterface["impl1"].(*clonerex.CloneableImpl); ok {
impl.Value = 777
if origImpl, ok := orig.CloneInterface["impl1"].(*clonerex.CloneableImpl); ok {
if origImpl.Value == 777 {
t.Errorf("Clone() aliased memory in CloneInterface: original was modified")
}
}
}
}
func TestDeeplyNestedMap(t *testing.T) {
num := 123
orig := &clonerex.DeeplyNestedMap{
ThreeLevels: map[string]map[string]map[string]int{
"a": {
"b": {"c": 1, "d": 2},
"e": {"f": 3},
},
"g": {
"h": {"i": 4},
},
},
FourLevels: map[string]map[string]map[string]map[string]*clonerex.SliceContainer{
"l1a": {
"l2a": {
"l3a": {
"l4a": {Slice: []*int{&num}},
"l4b": {Slice: []*int{&num, &num}},
},
},
},
},
}
cloned := orig.Clone()
if !reflect.DeepEqual(orig, cloned) {
t.Errorf("Clone() = %v, want %v", cloned, orig)
}
// Mutate the clone's ThreeLevels map
cloned.ThreeLevels["a"]["b"]["c"] = 777
if orig.ThreeLevels["a"]["b"]["c"] == 777 {
t.Errorf("Clone() aliased memory in ThreeLevels: original was modified")
}
// Mutate the clone's FourLevels map at the deepest pointer level
*cloned.FourLevels["l1a"]["l2a"]["l3a"]["l4a"].Slice[0] = 666
if *orig.FourLevels["l1a"]["l2a"]["l3a"]["l4a"].Slice[0] == 666 {
t.Errorf("Clone() aliased memory in FourLevels: original was modified")
}
// Add a new top-level key to the clone's FourLevels map
newNum := 999
cloned.FourLevels["l1b"] = map[string]map[string]map[string]*clonerex.SliceContainer{
"l2b": {
"l3b": {
"l4c": {Slice: []*int{&newNum}},
},
},
}
if _, exists := orig.FourLevels["l1b"]; exists {
t.Errorf("Clone() aliased FourLevels map: new top-level key appeared in original")
}
// Add a new nested key to the clone's FourLevels map
cloned.FourLevels["l1a"]["l2a"]["l3a"]["l4c"] = &clonerex.SliceContainer{Slice: []*int{&newNum}}
if _, exists := orig.FourLevels["l1a"]["l2a"]["l3a"]["l4c"]; exists {
t.Errorf("Clone() aliased FourLevels map: new nested key appeared in original")
}
}

@@ -1,7 +1,7 @@
 // Copyright (c) Tailscale Inc & AUTHORS
 // SPDX-License-Identifier: BSD-3-Clause
-//go:generate go run tailscale.com/cmd/cloner -clonefunc=true -type SliceContainer,InterfaceContainer,MapWithPointers,DeeplyNestedMap
+//go:generate go run tailscale.com/cmd/cloner -clonefunc=true -type SliceContainer
 // Package clonerex is an example package for the cloner tool.
 package clonerex
@@ -9,38 +9,3 @@ package clonerex
 type SliceContainer struct {
 	Slice []*int
 }
// Cloneable is an interface with a Clone method.
type Cloneable interface {
Clone() Cloneable
}
// CloneableImpl is a concrete type that implements Cloneable.
type CloneableImpl struct {
Value int
}
func (c *CloneableImpl) Clone() Cloneable {
if c == nil {
return nil
}
return &CloneableImpl{Value: c.Value}
}
// InterfaceContainer has a pointer to an interface field, which tests
// the special handling for interface types in the cloner.
type InterfaceContainer struct {
Interface Cloneable
}
type MapWithPointers struct {
Nested map[string]*int
WithCloneMethod map[string]*SliceContainer
CloneInterface map[string]Cloneable
}
// DeeplyNestedMap tests arbitrary depth of map nesting (3+ levels)
type DeeplyNestedMap struct {
ThreeLevels map[string]map[string]map[string]int
FourLevels map[string]map[string]map[string]map[string]*SliceContainer
}

@@ -6,8 +6,6 @@
 package clonerex
 import (
-	"maps"
 	"tailscale.com/types/ptr"
 )
@@ -37,133 +35,9 @@ var _SliceContainerCloneNeedsRegeneration = SliceContainer(struct {
 	Slice []*int
 }{})
// Clone makes a deep copy of InterfaceContainer.
// The result aliases no memory with the original.
func (src *InterfaceContainer) Clone() *InterfaceContainer {
if src == nil {
return nil
}
dst := new(InterfaceContainer)
*dst = *src
if src.Interface != nil {
dst.Interface = src.Interface.Clone()
}
return dst
}
// A compilation failure here means this code must be regenerated, with the command at the top of this file.
var _InterfaceContainerCloneNeedsRegeneration = InterfaceContainer(struct {
Interface Cloneable
}{})
// Clone makes a deep copy of MapWithPointers.
// The result aliases no memory with the original.
func (src *MapWithPointers) Clone() *MapWithPointers {
if src == nil {
return nil
}
dst := new(MapWithPointers)
*dst = *src
if dst.Nested != nil {
dst.Nested = map[string]*int{}
for k, v := range src.Nested {
if v == nil {
dst.Nested[k] = nil
} else {
dst.Nested[k] = ptr.To(*v)
}
}
}
if dst.WithCloneMethod != nil {
dst.WithCloneMethod = map[string]*SliceContainer{}
for k, v := range src.WithCloneMethod {
if v == nil {
dst.WithCloneMethod[k] = nil
} else {
dst.WithCloneMethod[k] = v.Clone()
}
}
}
if dst.CloneInterface != nil {
dst.CloneInterface = map[string]Cloneable{}
for k, v := range src.CloneInterface {
dst.CloneInterface[k] = v.Clone()
}
}
return dst
}
// A compilation failure here means this code must be regenerated, with the command at the top of this file.
var _MapWithPointersCloneNeedsRegeneration = MapWithPointers(struct {
Nested map[string]*int
WithCloneMethod map[string]*SliceContainer
CloneInterface map[string]Cloneable
}{})
// Clone makes a deep copy of DeeplyNestedMap.
// The result aliases no memory with the original.
func (src *DeeplyNestedMap) Clone() *DeeplyNestedMap {
if src == nil {
return nil
}
dst := new(DeeplyNestedMap)
*dst = *src
if dst.ThreeLevels != nil {
dst.ThreeLevels = map[string]map[string]map[string]int{}
for k, v := range src.ThreeLevels {
if v == nil {
dst.ThreeLevels[k] = nil
continue
}
dst.ThreeLevels[k] = map[string]map[string]int{}
for k2, v2 := range v {
dst.ThreeLevels[k][k2] = maps.Clone(v2)
}
}
}
if dst.FourLevels != nil {
dst.FourLevels = map[string]map[string]map[string]map[string]*SliceContainer{}
for k, v := range src.FourLevels {
if v == nil {
dst.FourLevels[k] = nil
continue
}
dst.FourLevels[k] = map[string]map[string]map[string]*SliceContainer{}
for k2, v2 := range v {
if v2 == nil {
dst.FourLevels[k][k2] = nil
continue
}
dst.FourLevels[k][k2] = map[string]map[string]*SliceContainer{}
for k3, v3 := range v2 {
if v3 == nil {
dst.FourLevels[k][k2][k3] = nil
continue
}
dst.FourLevels[k][k2][k3] = map[string]*SliceContainer{}
for k4, v4 := range v3 {
if v4 == nil {
dst.FourLevels[k][k2][k3][k4] = nil
} else {
dst.FourLevels[k][k2][k3][k4] = v4.Clone()
}
}
}
}
}
}
return dst
}
// A compilation failure here means this code must be regenerated, with the command at the top of this file.
var _DeeplyNestedMapCloneNeedsRegeneration = DeeplyNestedMap(struct {
ThreeLevels map[string]map[string]map[string]int
FourLevels map[string]map[string]map[string]map[string]*SliceContainer
}{})
 // Clone duplicates src into dst and reports whether it succeeded.
 // To succeed, <src, dst> must be of types <*T, *T> or <*T, **T>,
-// where T is one of SliceContainer,InterfaceContainer,MapWithPointers,DeeplyNestedMap.
+// where T is one of SliceContainer.
 func Clone(dst, src any) bool {
 	switch src := src.(type) {
 	case *SliceContainer:
@@ -175,33 +49,6 @@ func Clone(dst, src any) bool {
 		*dst = src.Clone()
 		return true
 	}
case *InterfaceContainer:
switch dst := dst.(type) {
case *InterfaceContainer:
*dst = *src.Clone()
return true
case **InterfaceContainer:
*dst = src.Clone()
return true
}
case *MapWithPointers:
switch dst := dst.(type) {
case *MapWithPointers:
*dst = *src.Clone()
return true
case **MapWithPointers:
*dst = src.Clone()
return true
}
case *DeeplyNestedMap:
switch dst := dst.(type) {
case *DeeplyNestedMap:
*dst = *src.Clone()
return true
case **DeeplyNestedMap:
*dst = src.Clone()
return true
}
 	}
 	return false
 }

@@ -0,0 +1,50 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"log"
"net/http"
"sync"
)
// healthz is a simple health check server. If enabled, it returns 200 OK
// while this tailscale node currently has at least one tailnet IP address,
// and 503 otherwise.
type healthz struct {
sync.Mutex
hasAddrs bool
}
func (h *healthz) ServeHTTP(w http.ResponseWriter, r *http.Request) {
h.Lock()
defer h.Unlock()
if h.hasAddrs {
w.Write([]byte("ok"))
} else {
http.Error(w, "node currently has no tailscale IPs", http.StatusServiceUnavailable)
}
}
func (h *healthz) update(healthy bool) {
h.Lock()
defer h.Unlock()
if h.hasAddrs != healthy {
log.Println("Setting healthy", healthy)
}
h.hasAddrs = healthy
}
// healthHandlers registers a simple health handler at /healthz.
// A containerized tailscale instance is considered healthy if
// it has at least one tailnet IP address.
func healthHandlers(mux *http.ServeMux) *healthz {
h := &healthz{}
mux.Handle("GET /healthz", h)
return h
}

@@ -1,331 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"net/netip"
"os"
"path/filepath"
"reflect"
"time"
"github.com/fsnotify/fsnotify"
"tailscale.com/kube/ingressservices"
"tailscale.com/kube/kubeclient"
"tailscale.com/util/linuxfw"
"tailscale.com/util/mak"
)
// ingressProxy corresponds to a Kubernetes Operator's network layer ingress
// proxy. It configures firewall rules (iptables or nftables) to proxy tailnet
// traffic to Kubernetes Services. Currently this is only used for network
// layer proxies in HA mode.
type ingressProxy struct {
cfgPath string // path to ingress configfile.
// nfr is the netfilter runner used to configure firewall rules.
// This is going to be either iptables or nftables based runner.
// Never nil.
nfr linuxfw.NetfilterRunner
kc kubeclient.Client // never nil
stateSecret string // Secret that holds Tailscale state
// Pod's IP addresses are used as an identifier of this particular Pod.
podIPv4 string // empty if Pod does not have IPv4 address
podIPv6 string // empty if Pod does not have IPv6 address
}
// run starts the ingress proxy and ensures that firewall rules are set on start
// and refreshed as ingress config changes.
func (p *ingressProxy) run(ctx context.Context, opts ingressProxyOpts) error {
log.Printf("starting ingress proxy...")
p.configure(opts)
var tickChan <-chan time.Time
var eventChan <-chan fsnotify.Event
if w, err := fsnotify.NewWatcher(); err != nil {
log.Printf("failed to create fsnotify watcher, timer-only mode: %v", err)
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
tickChan = ticker.C
} else {
defer w.Close()
dir := filepath.Dir(p.cfgPath)
if err := w.Add(dir); err != nil {
return fmt.Errorf("failed to add fsnotify watch for %v: %w", dir, err)
}
eventChan = w.Events
}
if err := p.sync(ctx); err != nil {
return err
}
for {
select {
case <-ctx.Done():
return nil
case <-tickChan:
log.Printf("periodic sync, ensuring firewall config is up to date...")
case <-eventChan:
log.Printf("config file change detected, ensuring firewall config is up to date...")
}
if err := p.sync(ctx); err != nil {
return fmt.Errorf("error syncing ingress service config: %w", err)
}
}
}
// sync reconciles proxy's firewall rules (iptables or nftables) on ingress config changes:
// - ensures that new firewall rules are added
// - ensures that old firewall rules are deleted
// - updates ingress proxy's status in the state Secret
func (p *ingressProxy) sync(ctx context.Context) error {
// 1. Get the desired firewall configuration
cfgs, err := p.getConfigs()
if err != nil {
return fmt.Errorf("ingress proxy: error retrieving configs: %w", err)
}
// 2. Get the recorded firewall status
status, err := p.getStatus(ctx)
if err != nil {
return fmt.Errorf("ingress proxy: error retrieving current status: %w", err)
}
// 3. Ensure that firewall configuration is up to date
if err := p.syncIngressConfigs(cfgs, status); err != nil {
return fmt.Errorf("ingress proxy: error syncing configs: %w", err)
}
var existingConfigs *ingressservices.Configs
if status != nil {
existingConfigs = &status.Configs
}
// 4. Update the recorded firewall status
if !(ingressServicesStatusIsEqual(cfgs, existingConfigs) && p.isCurrentStatus(status)) {
if err := p.recordStatus(ctx, cfgs); err != nil {
return fmt.Errorf("ingress proxy: error setting status: %w", err)
}
}
return nil
}
// getConfigs returns the desired ingress service configuration from the mounted
// config file.
func (p *ingressProxy) getConfigs() (*ingressservices.Configs, error) {
j, err := os.ReadFile(p.cfgPath)
if os.IsNotExist(err) {
return nil, nil
}
if err != nil {
return nil, err
}
if len(j) == 0 {
return nil, nil
}
cfg := &ingressservices.Configs{}
if err := json.Unmarshal(j, &cfg); err != nil {
return nil, err
}
return cfg, nil
}
// getStatus gets the recorded status of the configured firewall. The status is
// stored in the proxy's state Secret. Note that the recorded status might not
// be the current status of the firewall if it belongs to a previous Pod; we
// take that into account further down the line when determining if the desired
// rules are actually present.
func (p *ingressProxy) getStatus(ctx context.Context) (*ingressservices.Status, error) {
secret, err := p.kc.GetSecret(ctx, p.stateSecret)
if err != nil {
return nil, fmt.Errorf("error retrieving state Secret: %w", err)
}
status := &ingressservices.Status{}
raw, ok := secret.Data[ingressservices.IngressConfigKey]
if !ok {
return nil, nil
}
if err := json.Unmarshal([]byte(raw), status); err != nil {
return nil, fmt.Errorf("error unmarshalling previous config: %w", err)
}
return status, nil
}
// syncIngressConfigs takes the desired firewall configuration and the recorded
// status and ensures that any missing rules are added and no longer needed
// rules are deleted.
func (p *ingressProxy) syncIngressConfigs(cfgs *ingressservices.Configs, status *ingressservices.Status) error {
rulesToAdd := p.getRulesToAdd(cfgs, status)
rulesToDelete := p.getRulesToDelete(cfgs, status)
if err := ensureIngressRulesDeleted(rulesToDelete, p.nfr); err != nil {
return fmt.Errorf("error deleting ingress rules: %w", err)
}
if err := ensureIngressRulesAdded(rulesToAdd, p.nfr); err != nil {
return fmt.Errorf("error adding ingress rules: %w", err)
}
return nil
}
// recordStatus writes the configured firewall status to the proxy's state
// Secret. This allows the Kubernetes Operator to determine whether this proxy
// Pod has set up firewall rules to route traffic for an ingress service.
func (p *ingressProxy) recordStatus(ctx context.Context, newCfg *ingressservices.Configs) error {
status := &ingressservices.Status{}
if newCfg != nil {
status.Configs = *newCfg
}
// Pod IPs are used to determine if recorded status applies to THIS proxy Pod.
status.PodIPv4 = p.podIPv4
status.PodIPv6 = p.podIPv6
secret, err := p.kc.GetSecret(ctx, p.stateSecret)
if err != nil {
return fmt.Errorf("error retrieving state Secret: %w", err)
}
bs, err := json.Marshal(status)
if err != nil {
return fmt.Errorf("error marshalling status: %w", err)
}
secret.Data[ingressservices.IngressConfigKey] = bs
patch := kubeclient.JSONPatch{
Op: "replace",
Path: fmt.Sprintf("/data/%s", ingressservices.IngressConfigKey),
Value: bs,
}
if err := p.kc.JSONPatchResource(ctx, p.stateSecret, kubeclient.TypeSecrets, []kubeclient.JSONPatch{patch}); err != nil {
return fmt.Errorf("error patching state Secret: %w", err)
}
return nil
}
// getRulesToAdd takes the desired firewall configuration and the recorded
// firewall status and returns a map of missing Tailscale Services and rules.
func (p *ingressProxy) getRulesToAdd(cfgs *ingressservices.Configs, status *ingressservices.Status) map[string]ingressservices.Config {
if cfgs == nil {
return nil
}
var rulesToAdd map[string]ingressservices.Config
for tsSvc, wantsCfg := range *cfgs {
if status == nil || !p.isCurrentStatus(status) {
mak.Set(&rulesToAdd, tsSvc, wantsCfg)
continue
}
gotCfg := status.Configs.GetConfig(tsSvc)
if gotCfg == nil || !reflect.DeepEqual(wantsCfg, *gotCfg) {
mak.Set(&rulesToAdd, tsSvc, wantsCfg)
}
}
return rulesToAdd
}
// getRulesToDelete takes the desired firewall configuration and the recorded
// status and returns a map of Tailscale Services and rules that need to be deleted.
func (p *ingressProxy) getRulesToDelete(cfgs *ingressservices.Configs, status *ingressservices.Status) map[string]ingressservices.Config {
if status == nil || !p.isCurrentStatus(status) {
return nil
}
var rulesToDelete map[string]ingressservices.Config
for tsSvc, gotCfg := range status.Configs {
if cfgs == nil {
mak.Set(&rulesToDelete, tsSvc, gotCfg)
continue
}
wantsCfg := cfgs.GetConfig(tsSvc)
if wantsCfg != nil && reflect.DeepEqual(*wantsCfg, gotCfg) {
continue
}
mak.Set(&rulesToDelete, tsSvc, gotCfg)
}
return rulesToDelete
}
// ensureIngressRulesAdded takes a map of Tailscale Services and rules and ensures that the firewall rules are added.
func ensureIngressRulesAdded(cfgs map[string]ingressservices.Config, nfr linuxfw.NetfilterRunner) error {
for serviceName, cfg := range cfgs {
if cfg.IPv4Mapping != nil {
if err := addDNATRuleForSvc(nfr, serviceName, cfg.IPv4Mapping.TailscaleServiceIP, cfg.IPv4Mapping.ClusterIP); err != nil {
return fmt.Errorf("error adding ingress rule for %s: %w", serviceName, err)
}
}
if cfg.IPv6Mapping != nil {
if err := addDNATRuleForSvc(nfr, serviceName, cfg.IPv6Mapping.TailscaleServiceIP, cfg.IPv6Mapping.ClusterIP); err != nil {
return fmt.Errorf("error adding ingress rule for %s: %w", serviceName, err)
}
}
}
return nil
}
func addDNATRuleForSvc(nfr linuxfw.NetfilterRunner, serviceName string, tsIP, clusterIP netip.Addr) error {
log.Printf("adding DNAT rule for Tailscale Service %s with IP %s to Kubernetes Service IP %s", serviceName, tsIP, clusterIP)
return nfr.EnsureDNATRuleForSvc(serviceName, tsIP, clusterIP)
}
// ensureIngressRulesDeleted takes a map of Tailscale Services and rules and ensures that the firewall rules are deleted.
func ensureIngressRulesDeleted(cfgs map[string]ingressservices.Config, nfr linuxfw.NetfilterRunner) error {
for serviceName, cfg := range cfgs {
if cfg.IPv4Mapping != nil {
if err := deleteDNATRuleForSvc(nfr, serviceName, cfg.IPv4Mapping.TailscaleServiceIP, cfg.IPv4Mapping.ClusterIP); err != nil {
return fmt.Errorf("error deleting ingress rule for %s: %w", serviceName, err)
}
}
if cfg.IPv6Mapping != nil {
if err := deleteDNATRuleForSvc(nfr, serviceName, cfg.IPv6Mapping.TailscaleServiceIP, cfg.IPv6Mapping.ClusterIP); err != nil {
return fmt.Errorf("error deleting ingress rule for %s: %w", serviceName, err)
}
}
}
return nil
}
func deleteDNATRuleForSvc(nfr linuxfw.NetfilterRunner, serviceName string, tsIP, clusterIP netip.Addr) error {
log.Printf("deleting DNAT rule for Tailscale Service %s with IP %s to Kubernetes Service IP %s", serviceName, tsIP, clusterIP)
return nfr.DeleteDNATRuleForSvc(serviceName, tsIP, clusterIP)
}
// isCurrentStatus reports whether the status read from the proxy's state
// Secret belongs to the current proxy Pod. We use the Pod's IP addresses to
// determine that the status is for this Pod.
func (p *ingressProxy) isCurrentStatus(status *ingressservices.Status) bool {
if status == nil {
return true
}
return status.PodIPv4 == p.podIPv4 && status.PodIPv6 == p.podIPv6
}
type ingressProxyOpts struct {
cfgPath string
nfr linuxfw.NetfilterRunner // never nil
kc kubeclient.Client // never nil
stateSecret string
podIPv4 string
podIPv6 string
}
// configure sets the ingress proxy's configuration. It is called once on start
// so we don't care about concurrent access to fields.
func (p *ingressProxy) configure(opts ingressProxyOpts) {
p.cfgPath = opts.cfgPath
p.nfr = opts.nfr
p.kc = opts.kc
p.stateSecret = opts.stateSecret
p.podIPv4 = opts.podIPv4
p.podIPv6 = opts.podIPv6
}
func ingressServicesStatusIsEqual(st, st1 *ingressservices.Configs) bool {
if st == nil && st1 == nil {
return true
}
if st == nil || st1 == nil {
return false
}
return reflect.DeepEqual(*st, *st1)
}

@@ -1,223 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause

//go:build linux

package main

import (
	"net/netip"
	"testing"

	"tailscale.com/kube/ingressservices"
	"tailscale.com/util/linuxfw"
)
func TestSyncIngressConfigs(t *testing.T) {
tests := []struct {
name string
currentConfigs *ingressservices.Configs
currentStatus *ingressservices.Status
wantServices map[string]struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}
}{
{
name: "add_new_rules_when_no_existing_config",
currentConfigs: &ingressservices.Configs{
"svc:foo": makeServiceConfig("100.64.0.1", "10.0.0.1", "", ""),
},
currentStatus: nil,
wantServices: map[string]struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}{
"svc:foo": makeWantService("100.64.0.1", "10.0.0.1"),
},
},
{
name: "add_multiple_services",
currentConfigs: &ingressservices.Configs{
"svc:foo": makeServiceConfig("100.64.0.1", "10.0.0.1", "", ""),
"svc:bar": makeServiceConfig("100.64.0.2", "10.0.0.2", "", ""),
"svc:baz": makeServiceConfig("100.64.0.3", "10.0.0.3", "", ""),
},
currentStatus: nil,
wantServices: map[string]struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}{
"svc:foo": makeWantService("100.64.0.1", "10.0.0.1"),
"svc:bar": makeWantService("100.64.0.2", "10.0.0.2"),
"svc:baz": makeWantService("100.64.0.3", "10.0.0.3"),
},
},
{
name: "add_both_ipv4_and_ipv6_rules",
currentConfigs: &ingressservices.Configs{
"svc:foo": makeServiceConfig("100.64.0.1", "10.0.0.1", "2001:db8::1", "2001:db8::2"),
},
currentStatus: nil,
wantServices: map[string]struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}{
"svc:foo": makeWantService("2001:db8::1", "2001:db8::2"),
},
},
{
name: "add_ipv6_only_rules",
currentConfigs: &ingressservices.Configs{
"svc:ipv6": makeServiceConfig("", "", "2001:db8::10", "2001:db8::20"),
},
currentStatus: nil,
wantServices: map[string]struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}{
"svc:ipv6": makeWantService("2001:db8::10", "2001:db8::20"),
},
},
{
name: "delete_all_rules_when_config_removed",
currentConfigs: nil,
currentStatus: &ingressservices.Status{
Configs: ingressservices.Configs{
"svc:foo": makeServiceConfig("100.64.0.1", "10.0.0.1", "", ""),
"svc:bar": makeServiceConfig("100.64.0.2", "10.0.0.2", "", ""),
},
PodIPv4: "10.0.0.2", // Current pod IPv4
PodIPv6: "2001:db8::2", // Current pod IPv6
},
wantServices: map[string]struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}{},
},
{
name: "add_remove_modify",
currentConfigs: &ingressservices.Configs{
"svc:foo": makeServiceConfig("100.64.0.1", "10.0.0.2", "", ""), // Changed cluster IP
"svc:new": makeServiceConfig("100.64.0.4", "10.0.0.4", "", ""),
},
currentStatus: &ingressservices.Status{
Configs: ingressservices.Configs{
"svc:foo": makeServiceConfig("100.64.0.1", "10.0.0.1", "", ""),
"svc:bar": makeServiceConfig("100.64.0.2", "10.0.0.2", "", ""),
"svc:baz": makeServiceConfig("100.64.0.3", "10.0.0.3", "", ""),
},
PodIPv4: "10.0.0.2", // Current pod IPv4
PodIPv6: "2001:db8::2", // Current pod IPv6
},
wantServices: map[string]struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}{
"svc:foo": makeWantService("100.64.0.1", "10.0.0.2"),
"svc:new": makeWantService("100.64.0.4", "10.0.0.4"),
},
},
{
name: "update_with_outdated_status",
currentConfigs: &ingressservices.Configs{
"svc:web": makeServiceConfig("100.64.0.10", "10.0.0.10", "", ""),
"svc:web-ipv6": {
IPv6Mapping: &ingressservices.Mapping{
TailscaleServiceIP: netip.MustParseAddr("2001:db8::10"),
ClusterIP: netip.MustParseAddr("2001:db8::20"),
},
},
"svc:api": makeServiceConfig("100.64.0.20", "10.0.0.20", "", ""),
},
currentStatus: &ingressservices.Status{
Configs: ingressservices.Configs{
"svc:web": makeServiceConfig("100.64.0.10", "10.0.0.10", "", ""),
"svc:web-ipv6": {
IPv6Mapping: &ingressservices.Mapping{
TailscaleServiceIP: netip.MustParseAddr("2001:db8::10"),
ClusterIP: netip.MustParseAddr("2001:db8::20"),
},
},
"svc:old": makeServiceConfig("100.64.0.30", "10.0.0.30", "", ""),
},
PodIPv4: "10.0.0.1", // Outdated pod IP
PodIPv6: "2001:db8::1", // Outdated pod IP
},
wantServices: map[string]struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}{
"svc:web": makeWantService("100.64.0.10", "10.0.0.10"),
"svc:web-ipv6": makeWantService("2001:db8::10", "2001:db8::20"),
"svc:api": makeWantService("100.64.0.20", "10.0.0.20"),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var nfr linuxfw.NetfilterRunner = linuxfw.NewFakeNetfilterRunner()
ep := &ingressProxy{
nfr: nfr,
podIPv4: "10.0.0.2", // Current pod IPv4
podIPv6: "2001:db8::2", // Current pod IPv6
}
err := ep.syncIngressConfigs(tt.currentConfigs, tt.currentStatus)
if err != nil {
t.Fatalf("syncIngressConfigs failed: %v", err)
}
fake := nfr.(*linuxfw.FakeNetfilterRunner)
gotServices := fake.GetServiceState()
if len(gotServices) != len(tt.wantServices) {
t.Errorf("got %d services, want %d", len(gotServices), len(tt.wantServices))
}
for svc, want := range tt.wantServices {
got, ok := gotServices[svc]
if !ok {
t.Errorf("service %s not found", svc)
continue
}
if got.TailscaleServiceIP != want.TailscaleServiceIP {
t.Errorf("service %s: got TailscaleServiceIP %v, want %v", svc, got.TailscaleServiceIP, want.TailscaleServiceIP)
}
if got.ClusterIP != want.ClusterIP {
t.Errorf("service %s: got ClusterIP %v, want %v", svc, got.ClusterIP, want.ClusterIP)
}
}
})
}
}
func makeServiceConfig(tsIP, clusterIP string, tsIP6, clusterIP6 string) ingressservices.Config {
cfg := ingressservices.Config{}
if tsIP != "" && clusterIP != "" {
cfg.IPv4Mapping = &ingressservices.Mapping{
TailscaleServiceIP: netip.MustParseAddr(tsIP),
ClusterIP: netip.MustParseAddr(clusterIP),
}
}
if tsIP6 != "" && clusterIP6 != "" {
cfg.IPv6Mapping = &ingressservices.Mapping{
TailscaleServiceIP: netip.MustParseAddr(tsIP6),
ClusterIP: netip.MustParseAddr(clusterIP6),
}
}
return cfg
}
func makeWantService(tsIP, clusterIP string) struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
} {
return struct {
TailscaleServiceIP netip.Addr
ClusterIP netip.Addr
}{
TailscaleServiceIP: netip.MustParseAddr(tsIP),
ClusterIP: netip.MustParseAddr(clusterIP),
}
}

@@ -8,25 +8,15 @@ package main

 import (
 	"context"
 	"encoding/json"
-	"errors"
 	"fmt"
-	"log"
 	"net/http"
 	"net/netip"
 	"os"
-	"strings"
-	"time"

-	"tailscale.com/ipn"
-	"tailscale.com/kube/egressservices"
-	"tailscale.com/kube/ingressservices"
 	"tailscale.com/kube/kubeapi"
 	"tailscale.com/kube/kubeclient"
 	"tailscale.com/kube/kubetypes"
 	"tailscale.com/tailcfg"
-	"tailscale.com/types/logger"
-	"tailscale.com/util/backoff"
-	"tailscale.com/util/set"
 )

 // kubeClient is a wrapper around Tailscale's internal kube client that knows how to talk to the kube API server. We use
@@ -120,100 +110,19 @@ func (kc *kubeClient) deleteAuthKey(ctx context.Context) error {
 	return nil
 }

-// resetContainerbootState resets state from previous runs of containerboot to
-// ensure the operator doesn't use stale state when a Pod is first recreated.
-func (kc *kubeClient) resetContainerbootState(ctx context.Context, podUID string) error {
-	existingSecret, err := kc.GetSecret(ctx, kc.stateSecret)
-	switch {
-	case kubeclient.IsNotFoundErr(err):
-		// In the case that the Secret doesn't exist, we don't have any state to reset and can return early.
-		return nil
-	case err != nil:
-		return fmt.Errorf("failed to read state Secret %q to reset state: %w", kc.stateSecret, err)
-	}
-	s := &kubeapi.Secret{
-		Data: map[string][]byte{
-			kubetypes.KeyCapVer: fmt.Appendf(nil, "%d", tailcfg.CurrentCapabilityVersion),
-		},
-	}
+// storeCapVerUID stores the current capability version of tailscale and, if provided, UID of the Pod in the tailscale
+// state Secret.
+// These two fields are used by the Kubernetes Operator to observe the current capability version of tailscaled running in this container.
+func (kc *kubeClient) storeCapVerUID(ctx context.Context, podUID string) error {
+	capVerS := fmt.Sprintf("%d", tailcfg.CurrentCapabilityVersion)
+	d := map[string][]byte{
+		kubetypes.KeyCapVer: []byte(capVerS),
+	}
 	if podUID != "" {
-		s.Data[kubetypes.KeyPodUID] = []byte(podUID)
+		d[kubetypes.KeyPodUID] = []byte(podUID)
 	}
-
-	toClear := set.SetOf([]string{
-		kubetypes.KeyDeviceID,
-		kubetypes.KeyDeviceFQDN,
-		kubetypes.KeyDeviceIPs,
-		kubetypes.KeyHTTPSEndpoint,
-		egressservices.KeyEgressServices,
-		ingressservices.IngressConfigKey,
-	})
-	for key := range existingSecret.Data {
-		if toClear.Contains(key) {
-			// It's fine to leave the key in place as a debugging breadcrumb,
-			// it should get a new value soon.
-			s.Data[key] = nil
-		}
-	}
+	s := &kubeapi.Secret{
+		Data: d,
+	}
 	return kc.StrategicMergePatchSecret(ctx, kc.stateSecret, s, "tailscale-container")
 }
-
-// waitForConsistentState waits for tailscaled to finish writing state if it
-// looks like it's started. It is designed to reduce the likelihood that
-// tailscaled gets shut down in the window between authenticating to control
-// and finishing writing state. However, it's not bullet proof because we can't
-// atomically authenticate and write state.
-func (kc *kubeClient) waitForConsistentState(ctx context.Context) error {
-	var logged bool
-	bo := backoff.NewBackoff("", logger.Discard, 2*time.Second)
-	for {
-		select {
-		case <-ctx.Done():
-			return ctx.Err()
-		default:
-		}
-		secret, err := kc.GetSecret(ctx, kc.stateSecret)
-		if ctx.Err() != nil || kubeclient.IsNotFoundErr(err) {
-			return nil
-		}
-		if err != nil {
-			return fmt.Errorf("getting Secret %q: %v", kc.stateSecret, err)
-		}
-		if hasConsistentState(secret.Data) {
-			return nil
-		}
-		if !logged {
-			log.Printf("Waiting for tailscaled to finish writing state to Secret %q", kc.stateSecret)
-			logged = true
-		}
-		bo.BackOff(ctx, errors.New("")) // Fake error to trigger actual sleep.
-	}
-}
-
-// hasConsistentState returns true if there is either no state or the full set
-// of expected keys are present.
-func hasConsistentState(d map[string][]byte) bool {
-	var (
-		_, hasCurrent = d[string(ipn.CurrentProfileStateKey)]
-		_, hasKnown   = d[string(ipn.KnownProfilesStateKey)]
-		_, hasMachine = d[string(ipn.MachineKeyStateKey)]
-		hasProfile    bool
-	)
-	for k := range d {
-		if strings.HasPrefix(k, "profile-") {
-			if hasProfile {
-				return false // We only expect one profile.
-			}
-			hasProfile = true
-		}
-	}
-	// Approximate check, we don't want to reimplement all of profileManager.
-	return (hasCurrent && hasKnown && hasMachine && hasProfile) ||
-		(!hasCurrent && !hasKnown && !hasMachine && !hasProfile)
-}

@@ -8,18 +8,11 @@ package main

 import (
 	"context"
 	"errors"
-	"fmt"
 	"testing"
-	"time"

 	"github.com/google/go-cmp/cmp"
-	"tailscale.com/ipn"
-	"tailscale.com/kube/egressservices"
-	"tailscale.com/kube/ingressservices"
 	"tailscale.com/kube/kubeapi"
 	"tailscale.com/kube/kubeclient"
-	"tailscale.com/kube/kubetypes"
-	"tailscale.com/tailcfg"
 )

 func TestSetupKube(t *testing.T) {
@@ -212,109 +205,3 @@ func TestSetupKube(t *testing.T) {
 		})
 	}
 }
-
-func TestWaitForConsistentState(t *testing.T) {
-	data := map[string][]byte{
-		// Missing _current-profile.
-		string(ipn.KnownProfilesStateKey): []byte(""),
-		string(ipn.MachineKeyStateKey):    []byte(""),
-		"profile-foo":                     []byte(""),
-	}
-	kc := &kubeClient{
-		Client: &kubeclient.FakeClient{
-			GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
-				return &kubeapi.Secret{
-					Data: data,
-				}, nil
-			},
-		},
-	}
-	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
-	defer cancel()
-	if err := kc.waitForConsistentState(ctx); err != context.DeadlineExceeded {
-		t.Fatalf("expected DeadlineExceeded, got %v", err)
-	}
-	ctx, cancel = context.WithTimeout(context.Background(), time.Second)
-	defer cancel()
-	data[string(ipn.CurrentProfileStateKey)] = []byte("")
-	if err := kc.waitForConsistentState(ctx); err != nil {
-		t.Fatalf("expected nil, got %v", err)
-	}
-}
-
-func TestResetContainerbootState(t *testing.T) {
-	capver := fmt.Appendf(nil, "%d", tailcfg.CurrentCapabilityVersion)
-	for name, tc := range map[string]struct {
-		podUID   string
-		initial  map[string][]byte
-		expected map[string][]byte
-	}{
-		"empty_initial": {
-			podUID:  "1234",
-			initial: map[string][]byte{},
-			expected: map[string][]byte{
-				kubetypes.KeyCapVer: capver,
-				kubetypes.KeyPodUID: []byte("1234"),
-			},
-		},
-		"empty_initial_no_pod_uid": {
-			initial: map[string][]byte{},
-			expected: map[string][]byte{
-				kubetypes.KeyCapVer: capver,
-			},
-		},
-		"only_relevant_keys_updated": {
-			podUID: "1234",
-			initial: map[string][]byte{
-				kubetypes.KeyCapVer:              []byte("1"),
-				kubetypes.KeyPodUID:              []byte("5678"),
-				kubetypes.KeyDeviceID:            []byte("device-id"),
-				kubetypes.KeyDeviceFQDN:          []byte("device-fqdn"),
-				kubetypes.KeyDeviceIPs:           []byte(`["192.0.2.1"]`),
-				kubetypes.KeyHTTPSEndpoint:       []byte("https://example.com"),
-				egressservices.KeyEgressServices: []byte("egress-services"),
-				ingressservices.IngressConfigKey: []byte("ingress-config"),
-				"_current-profile":               []byte("current-profile"),
-				"_machinekey":                    []byte("machine-key"),
-				"_profiles":                      []byte("profiles"),
-				"_serve_e0ce":                    []byte("serve-e0ce"),
-				"profile-e0ce":                   []byte("profile-e0ce"),
-			},
-			expected: map[string][]byte{
-				kubetypes.KeyCapVer: capver,
-				kubetypes.KeyPodUID: []byte("1234"),
-				// Cleared keys.
-				kubetypes.KeyDeviceID:            nil,
-				kubetypes.KeyDeviceFQDN:          nil,
-				kubetypes.KeyDeviceIPs:           nil,
-				kubetypes.KeyHTTPSEndpoint:       nil,
-				egressservices.KeyEgressServices: nil,
-				ingressservices.IngressConfigKey: nil,
-				// Tailscaled keys not included in patch.
-			},
-		},
-	} {
-		t.Run(name, func(t *testing.T) {
-			var actual map[string][]byte
-			kc := &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
-				GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
-					return &kubeapi.Secret{
-						Data: tc.initial,
-					}, nil
-				},
-				StrategicMergePatchSecretImpl: func(ctx context.Context, name string, secret *kubeapi.Secret, _ string) error {
-					actual = secret.Data
-					return nil
-				},
-			}}
-			if err := kc.resetContainerbootState(context.Background(), tc.podUID); err != nil {
-				t.Fatalf("resetContainerbootState() error = %v", err)
-			}
-			if diff := cmp.Diff(tc.expected, actual); diff != "" {
-				t.Errorf("resetContainerbootState() mismatch (-want +got):\n%s", diff)
-			}
-		})
-	}
-}
