Commit Graph

9 Commits (dsnet/jsonimports-ci)

Author SHA1 Message Date
Joe Tsai c299a96624 all: apply consistent imports of "json" packages
This runs:

        go run ./cmd/jsonimports -update -ignore=tempfork/

which applies the following rules:

  * Until the Go standard library formally accepts "encoding/json/v2"
    and "encoding/json/jsontext" into the standard library
    (i.e., they are no longer considered experimental),
    we forbid any code from directly importing those packages.
    Go code should instead import "github.com/go-json-experiment/json"
    and "github.com/go-json-experiment/json/jsontext".
    The latter packages contain aliases to the standard library
    if built on Go 1.25 with the goexperiment.jsonv2 tag specified.

  * Imports of "encoding/json" or "github.com/go-json-experiment/json/v1"
    must use the explicit package name "jsonv1".
    If both packages need to be imported, then
    the former should instead be imported under the package name "jsonv1std".

  * Imports of "github.com/go-json-experiment/json"
    must use the explicit package name "jsonv2".

The latter two rules exist to provide clarity when reading code.
Without them, it is unclear whether "json.Marshal" refers to v1 or v2.
With them, however, it is clear that "jsonv1.Marshal" is calling v1 and
that "jsonv2.Marshal" is calling v2.
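
For example, a file that needs both implementations would import them roughly
as follows (an illustrative sketch of the convention, not code from this change):

        package example

        import (
                jsonv1 "encoding/json"                      // v1 semantics
                jsonv2 "github.com/go-json-experiment/json" // v2 semantics
        )

        // marshalBoth encodes v with both implementations; the named imports
        // make it obvious at each call site which version is used.
        func marshalBoth(v any) (v1Out, v2Out []byte, err error) {
                if v1Out, err = jsonv1.Marshal(v); err != nil {
                        return nil, nil, err
                }
                if v2Out, err = jsonv2.Marshal(v); err != nil {
                        return nil, nil, err
                }
                return v1Out, v2Out, nil
        }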

Updates tailscale/corp#791

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
4 weeks ago
Tom Proctor f421907c38
all-kube: create Tailscale Service for HA kube-apiserver ProxyGroup (#16572)
Adds a new reconciler for ProxyGroups of type kube-apiserver that will
provision a Tailscale Service for each replica to advertise. Adds two
new condition types to the ProxyGroup, TailscaleServiceValid and
TailscaleServiceConfigured, to post updates on the state of that
reconciler in a way that's consistent with the service-pg reconciler.
The created Tailscale Service name is configurable via a new ProxyGroup
field spec.kubeAPIServer.ServiceName, which expects a string of the
form "svc:<dns-label>".
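
A rough sketch of what that name format implies, using a hypothetical helper
rather than the reconciler's actual validation:

        package example

        import (
                "fmt"
                "regexp"
                "strings"
        )

        // dnsLabelRE matches an RFC 1123 DNS label: lowercase alphanumerics
        // and '-', at most 63 characters, not starting or ending with '-'.
        var dnsLabelRE = regexp.MustCompile(`^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$`)

        // validateServiceName checks the documented "svc:<dns-label>" form.
        // Hypothetical helper for illustration only.
        func validateServiceName(name string) error {
                label, ok := strings.CutPrefix(name, "svc:")
                if !ok {
                        return fmt.Errorf("service name %q must start with %q", name, "svc:")
                }
                if !dnsLabelRE.MatchString(label) {
                        return fmt.Errorf("%q is not a valid DNS label", label)
                }
                return nil
        }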

Lots of supporting changes were needed to implement this in a way that's
consistent with other operator workflows, including:

* Pulled containerboot's ensureServicesUnadvertised and certManager into
  kube/ libraries to be shared with k8s-proxy. Use those in k8s-proxy to
  aid Service cert sharing between replicas and graceful Service shutdown.
* For certManager, add an initial wait to the cert loop until the
  domain appears in the device's netmap, avoiding a guaranteed error
  on the first issuance attempt when the proxy starts quickly.
* Made several methods in ingress-for-pg.go and svc-for-pg.go into
  functions to share with the new reconciler.
* Added a Resource struct to the owner refs stored in Tailscale Service
  annotations to be able to distinguish between Ingress- and ProxyGroup-
  based Services that need cleaning up in the Tailscale API.
* Added a ListVIPServices method to the internal tailscale client to aid
  cleaning up orphaned Services.
* Support for reading config from a kube Secret, and partial support for
  config reloading, to avoid having to force Pod restarts when
  config changes.
* Fixed up the zap logger so it's possible to set the debug log level
  (a minimal sketch follows this list).
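
A minimal sketch of that last point, assuming stock zap configuration rather
than the operator's actual wiring:

        package example

        import (
                "go.uber.org/zap"
                "go.uber.org/zap/zapcore"
        )

        // newLogger builds a production-style zap logger whose level can be
        // raised to debug at startup. Illustrative only.
        func newLogger(debug bool) (*zap.SugaredLogger, error) {
                cfg := zap.NewProductionConfig()
                if debug {
                        cfg.Level = zap.NewAtomicLevelAt(zapcore.DebugLevel)
                }
                logger, err := cfg.Build()
                if err != nil {
                        return nil, err
                }
                return logger.Sugar(), nil
        }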

Updates #13358

Change-Id: Ia9607441157dd91fb9b6ecbc318eecbef446e116
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
5 months ago
Tom Proctor 4dfed6b146
cmd/{k8s-operator,k8s-proxy}: add kube-apiserver ProxyGroup type (#16266)
Adds a new k8s-proxy command to convert operator's in-process proxy to
a separately deployable type of ProxyGroup: kube-apiserver. k8s-proxy
reads in a new config file written by the operator, modelled on tailscaled's
conffile but with some modifications to ensure multiple versions of the
config can co-exist within a file. This should make it much easier to
support reading that config file from a Kube Secret with a stable file name.
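
One way such a file could be structured, purely as a hypothetical sketch (the
type and field names below are invented, not the ones this change introduces):

        package conf

        // configFile is a hypothetical envelope that lets several versions of
        // the k8s-proxy config co-exist in a single file with a stable name.
        type configFile struct {
                Version  string          `json:"version"`
                V1Alpha1 *v1Alpha1Config `json:"v1alpha1,omitempty"`
        }

        // v1Alpha1Config is a placeholder for one concrete config version;
        // its fields here are invented for illustration.
        type v1Alpha1Config struct {
                Hostname string `json:"hostname,omitempty"`
                AuthMode bool   `json:"authMode,omitempty"`
        }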

To avoid needing to give the operator ClusterRole{,Binding} permissions,
the helm chart now optionally deploys a new static ServiceAccount for
the API Server proxy to use if in auth mode.

Proxies deployed by kube-apiserver ProxyGroups currently work the same as
the operator's in-process proxy. They do not yet leverage Tailscale Services
for presenting a single HA DNS name.

Updates #13358

Change-Id: Ib6ead69b2173c5e1929f3c13fb48a9a5362195d8
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
5 months ago
Tom Proctor 711698f5a9
cmd/{containerboot,k8s-operator}: use state Secret for checking device auth (#16328)
Previously, the operator checked the ProxyGroup status fields for
information on how many of the proxies had successfully authed. Use
their state Secrets instead as a more reliable source of truth.

containerboot has written device_fqdn and device_ips keys to the
state Secret since inception, and pod_uid since 1.78.0, so there's
no need to use the API for that data. Read it from the state Secret
for consistency. However, to ensure we don't read data from a
previous run of containerboot, make sure we reset containerboot's
state keys on startup.
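
A rough sketch of reading those keys from the state Secret with plain
client-go types (the helper and struct are illustrative, and device_ips is
assumed here to be JSON-encoded):

        package example

        import (
                "encoding/json"

                corev1 "k8s.io/api/core/v1"
        )

        // deviceInfo mirrors the keys containerboot writes to the state Secret.
        type deviceInfo struct {
                FQDN   string
                IPs    []string
                PodUID string
        }

        // readDeviceInfo is an illustrative helper, not the operator's code.
        func readDeviceInfo(s *corev1.Secret) (deviceInfo, error) {
                info := deviceInfo{
                        FQDN:   string(s.Data["device_fqdn"]),
                        PodUID: string(s.Data["pod_uid"]),
                }
                if raw, ok := s.Data["device_ips"]; ok {
                        if err := json.Unmarshal(raw, &info.IPs); err != nil {
                                return deviceInfo{}, err
                        }
                }
                return info, nil
        }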

One other knock-on effect of that is that ProxyGroups can briefly be
marked not Ready while a Pod is restarting. Introduce a new
ProxyGroupAvailable condition to more accurately reflect
when downstream controllers can implement flows that rely on a
ProxyGroup having at least 1 proxy Pod running.
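
A generic sketch of what setting such a condition looks like, using
apimachinery's condition helpers rather than the operator's own types and
helpers (reasons and messages are invented here):

        package example

        import (
                "k8s.io/apimachinery/pkg/api/meta"
                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        )

        // setAvailable records a ProxyGroupAvailable-style condition on a plain
        // []metav1.Condition slice. Illustrative only.
        func setAvailable(conds *[]metav1.Condition, runningPods int) {
                cond := metav1.Condition{
                        Type:    "ProxyGroupAvailable",
                        Status:  metav1.ConditionFalse,
                        Reason:  "NoProxyPodsRunning",
                        Message: "waiting for at least one proxy Pod to run",
                }
                if runningPods > 0 {
                        cond.Status = metav1.ConditionTrue
                        cond.Reason = "ProxyPodsRunning"
                        cond.Message = "at least one proxy Pod is running"
                }
                meta.SetStatusCondition(conds, cond)
        }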

Fixes #16327

Change-Id: I026c18e9d23e87109a471a87b8e4fb6271716a66

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
5 months ago
Irbe Krumina e29e3c150f
cmd/k8s-operator: ensure that TLS resources are updated for HA Ingress (#16262)
Ensure that if the ProxyGroup for HA Ingress changes, the TLS Secret
and Role and RoleBinding that allow proxies to read/write to it are
updated.

Fixes #16259

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
6 months ago
Irbe Krumina 00a7dd180a
cmd/k8s-operator: validate Service tags, catch duplicate Tailscale Services (#16058)
Validate that any tags that users have specified via the tailscale.com/tags
annotation are valid Tailscale ACL tags.
Validate that no more than one HA Tailscale Kubernetes Service in a single
cluster refers to the same Tailscale Service.
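
As a loose illustration of the first check (the authoritative rules live in
Tailscale's own validation code; this only covers the obvious cases):

        package example

        import (
                "fmt"
                "strings"
        )

        // checkTag performs a loose sanity check on one entry from the
        // tailscale.com/tags annotation. Illustrative only; the real
        // validation of ACL tags is stricter.
        func checkTag(tag string) error {
                name, ok := strings.CutPrefix(tag, "tag:")
                if !ok {
                        return fmt.Errorf("tag %q must start with %q", tag, "tag:")
                }
                if name == "" {
                        return fmt.Errorf("tag %q has an empty name", tag)
                }
                return nil
        }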

Updates tailscale/tailscale#16054
Updates tailscale/tailscale#16035

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
7 months ago
Irbe Krumina c4fb380f3c
cmd/k8s-operator: fix Tailscale Service API errors check (#16020)
Updates tailscale/tailscale#15895

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
7 months ago
Tom Meadows b5770c81c9
cmd/k8s-operator: rename VIPService -> Tailscale Service in L3 HA Service Reconciler (#16014)
Also changes wording in tests for the L7 HA Reconciler.

Updates #15895

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
7 months ago
Tom Meadows df8d51023e
cmd/k8s-operator,kube/kubetypes,k8s-operator/apis: reconcile L3 HA Services (#15961)
This reconciler allows users to make applications highly available at L3 by
leveraging Tailscale Virtual Services. Many Kubernetes Services
(irrespective of the cluster they reside in) can be mapped to a single
Tailscale Virtual Service, allowing access to these Services at L3.

Updates #15895

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
7 months ago