Define PeerCapability and PeerCapMap as the new way of sending down
inter-peer capability information.
Previously, this was unstructured and you could only send down strings,
which got too limiting for certain use cases. Instead, add the ability
to send down raw JSON messages that are opaque to Tailscale but allow
applications to define them however they wish.
Also update accessors to use the new values.
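For illustration, a minimal sketch of the shape described above (the
exact definitions are assumptions, not the real code):
```go
package tailcfg

import "encoding/json"

// PeerCapability names a capability granted between peers.
type PeerCapability string

// PeerCapMap carries raw JSON values that are opaque to Tailscale;
// each application defines its own schema for its capabilities.
type PeerCapMap map[PeerCapability][]json.RawMessage
```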
Updates #4217
Signed-off-by: Maisem Ali <maisem@tailscale.com>
util/linuxfw/iptables.go had a bunch of code that wasn't yet used
(in prep for future work) but, because of its imports, ended up
initializing code deep within gvisor that panicked on init on arm64
systems not using 4KB pages.
This deletes the unused code to drop those imports and remove the
panic. We can then cherry-pick this back to the branch and restore it
later in a different way.
A new test makes sure we don't regress in the future by depending on
the panicking package in question.
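A sketch of what such a regression test can look like, assuming it
shells out to `go list`; the real test may use a different mechanism:
```go
package linuxfw

import (
	"os/exec"
	"strings"
	"testing"
)

func TestNoHostarchDep(t *testing.T) {
	// "go list -deps" prints the full transitive import graph.
	out, err := exec.Command("go", "list", "-deps", ".").Output()
	if err != nil {
		t.Fatal(err)
	}
	// gvisor.dev/gvisor/pkg/hostarch panics on init on arm64 systems
	// not using 4KB pages, so it must never be (re)imported here.
	if strings.Contains(string(out), "gvisor.dev/gvisor/pkg/hostarch") {
		t.Fatal("must not depend on gvisor.dev/gvisor/pkg/hostarch")
	}
}
```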
Fixes #8658
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This commit adds nftables rule injection for tailscaled. If tailscaled is
started with the envknob TS_DEBUG_USE_NETLINK_NFTABLES=true, the router
will use nftables to manage firewall rules.
Updates #391
Signed-off-by: KevinLiang10 <kevinliang@tailscale.com>
The server hasn't sent it in ages.
Updates #cleanup
Change-Id: I9695ab0f074ec6fb006e11faf3cdfc5ca049fbf8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Exclude GOARCHs mips, mips64, mips64le, mipsle, and riscv64.
These archs are not supported by gvisor.dev/gvisor/pkg/hostarch.
Fixes #391
Signed-off-by: KevinLiang10 <kevinliang@tailscale.com>
This change introduces a new netfilterRunner interface and moves iptables manipulation into a lower-level iptables runner.
For #391
Signed-off-by: KevinLiang10 <kevinliang@tailscale.com>
ScrubbedGoroutineDump previously only returned the stacks of all
goroutines. I also want to be able to use this for only the current
goroutine's stack. Add a bool param to support both ways.
Updates tailscale/corp#5149
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In order to improve our ability to understand the state of policies and
registry settings when troubleshooting, we enumerate all values in all subkeys.
x/sys/windows does not already offer this, so we need to call RegEnumValue
directly.
For now we're just logging this during startup; in a future PR I plan to
also trigger this code during a bugreport. I also want to log more than
just the registry.
Fixes #8141
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
We have two other types of Sets here. Add the basic obvious one too.
Needed for a change elsewhere.
Updates #cleanup
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
I noticed cmd/{cloner,viewer} didn't support structs with embedded
fields while working on a change in another repo. This adds support.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This adds an initial and intentionally minimal configuration for
golang-ci, fixes the issues reported, and adds a GitHub Action to check
new pull requests against this linter configuration.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I8f38fbc315836a19a094d0d3e986758b9313f163
This is an exact copy of the files misc/set/set{,_test}.go from
tailscale/corp@a5415daa9c, plus the
license headers.
For use in #7877
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I712d09c6d1a180c6633abe3acf8feb59b27e2866
This makes `omitempty` actually work, and saves bytes in each map response.
Updates tailscale/corp#8020
Signed-off-by: Maisem Ali <maisem@tailscale.com>
A peer can have IsWireGuardOnly, which means it will not support DERP or
Disco, and it must have Endpoints filled in order to be usable.
In the present implementation only the first Endpoint will be used as
the bestAddr.
Updates tailscale/corp#10351
Co-authored-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
Co-authored-by: James Tucker <james@tailscale.com>
Signed-off-by: James Tucker <james@tailscale.com>
Adds NewGaugeFunc and NewCounterFunc (inspired by expvar.Func) which
change the current value to be reported by a function. This allows
some client metric values to be computed on-demand during uploading (at
most every 15 seconds), instead of being continuously updated.
clientmetric uploading had a bunch of micro-optimizations for memory
access (#3331) which are not possible with this approach. However, any
performance hit from function-based metrics is contained to those metrics
only, and we expect to have very few.
Also adds a DisableDeltas() option for client metrics, so that absolute
values are always reported. This makes server-side processing of some
metrics easier to reason about.
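A usage sketch, assuming the signatures follow expvar.Func as described
(the exact API shape is an assumption):
```go
package main

import "tailscale.com/util/clientmetric"

// A gauge whose value is computed when metrics are uploaded
// (at most every 15 seconds) instead of continuously maintained.
var memInUse = clientmetric.NewGaugeFunc("demo_mem_in_use", func() int64 {
	return currentMemInUse() // hypothetical on-demand lookup
})

func currentMemInUse() int64 { return 0 } // stub for the sketch
```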
Updates tailscale/corp#9230
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
I realized that a lot of the problems that we're seeing around migration and
LocalBackend state can be avoided if we drive Windows pref migration entirely
from within tailscaled. By doing it this way, tailscaled can automatically
perform the migration as soon as the connection with the client frontend is
established.
Since tailscaled is already running as LocalSystem, it already has access to
the user's local AppData directory. The profile manager already knows which
user is connected, so we simply need to resolve the user's prefs file and read
it from there.
Of course, to properly migrate this information we need to also check system
policies. I moved a bunch of policy resolution code out of the GUI and into
a new package in util/winutil/policy.
Updates #7626
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
This adds the util/sysresources package, which currently only contains a
function to return the total memory size of the current system.
Then, we modify magicsock to scale the number of buffered DERP messages
based on the system's available memory, ensuring that we never use a
value lower than the previous constant of 32.
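A sketch of the scaling idea with the documented floor of 32, assuming
the function is named TotalMemory; the divisor is illustrative, not the
actual tuning:
```go
package magicsock

import "tailscale.com/util/sysresources"

func bufferedDerpWrites() int {
	const floor = 32 // the previous constant value
	// Hypothetical curve: one buffered message per 64 MiB of RAM.
	n := int(sysresources.TotalMemory() / (64 << 20))
	if n < floor {
		return floor
	}
	return n
}
```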
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ib763c877de4d0d4ee88869078e7d512f6a3a148d
In addition to checking the total hostname length, validate characters used in each DNS label and label length.
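A sketch of per-label validation, assuming classic RFC 1035 hostname
rules (letters, digits, hyphens; no leading/trailing hyphen; at most
63 bytes per label):
```go
func validLabel(label string) bool {
	if len(label) == 0 || len(label) > 63 {
		return false
	}
	if label[0] == '-' || label[len(label)-1] == '-' {
		return false
	}
	for i := 0; i < len(label); i++ {
		c := label[i]
		ok := 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' ||
			'0' <= c && c <= '9' || c == '-'
		if !ok {
			return false
		}
	}
	return true
}
```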
Updates https://github.com/tailscale/corp/issues/10012
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
This only adds the field, to be used in a future commit.
Updates tailscale/corp#8020
Co-authored-by: Melanie Warrick <warrick@tailscale.com>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This package handles cases where we need to truncate human-readable text to fit
a length constraint without leaving "ragged" multi-byte rune fragments at the
end of the truncated value.
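A minimal sketch of rune-safe truncation (the package's real API and
edge-case handling may differ):
```go
package truncate

import "unicode/utf8"

// String clips s to at most n bytes without leaving a partial
// multi-byte rune at the end.
func String(s string, n int) string {
	if len(s) <= n {
		return s
	}
	// Back up until the cut lands on a rune boundary.
	for n > 0 && !utf8.RuneStart(s[n]) {
		n--
	}
	return s[:n]
}
```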
Change-Id: Id972135d1880485f41b1fedfb65c2b8cc012d416
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
Now that we're using rand.Shuffle in a few locations, create a generic
shuffle function and use it instead. While we're at it, move the
interleaveSlices function to the same package so it can be reused.
A sketch of the generic helper is below (package and name here are
assumptions).
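```go
package slicesx

import "math/rand"

// Shuffle pseudo-randomly permutes s in place.
func Shuffle[T any](s []T) {
	rand.Shuffle(len(s), func(i, j int) {
		s[i], s[j] = s[j], s[i]
	})
}
```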
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I0b00920e5b3eea846b6cedc30bd34d978a049fd3
Also add some basic tests for this implementation.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I307ebb6db91d0c172657befb276b38ccb638f828
This isn't currently supported due to missing support in upstream
dependencies, and also we don't use this package anywhere right now.
Just conditionally skip this for now.
Fixes #7268
Change-Id: Ie7389c2c0816b39b410c02a7276051a4c18b6450
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
This package is an initial implementation of something that can read
netfilter and iptables rules from the Linux kernel without needing to
shell out to an external utility; it speaks directly to the kernel using
syscalls and parses the data returned.
Currently this is read-only since it only knows how to parse a subset of
the available data.
Signed-off-by: Andrew Dunham <andrew@tailscale.com>
Change-Id: Iccadf5dcc081b73268d8ccf8884c24eb6a6f1ff5
Now that Go 1.20 is released, multierr.Error can implement
Unwrap() []error
Updates #7123
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ic28c2579de6799801836c447afbca8cdcba732cf
Update all code generation tools, and those that check for license
headers to use the new standard header.
Also update copyright statement in LICENSE file.
Fixes #6865
Signed-off-by: Will Norris <will@tailscale.com>
This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
This commit was done almost entirely mechanically with perl, and then
some minimal manual fixes.
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>
Goal: one way for users to update Tailscale, downgrade, switch tracks,
regardless of platform (Windows, most Linux distros, macOS, Synology).
This is a start.
Updates #755, etc
Change-Id: I23466da1ba41b45f0029ca79a17f5796c2eedd92
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Nodes that are expired, taking into account the time delta calculated
from MapResponse.ControlTime, have the newly-added Expired boolean set.
For additional defense-in-depth, also replicate what control does and
clear the Endpoints and DERP fields, and additionally set the node key
to a bogus value.
Updates #6932
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ia2bd6b56064416feee28aef5699ca7090940662a
Consider the following pattern:
```
err1 := foo()
err2 := bar()
err3 := baz()
return multierr.New(err1, err2, err3)
```
If err1, err2, and err3 are all nil, then multierr.New should not allocate.
Thus, modify the logic of New to count the number of distinct error values
and allocate an exactly-sized slice. This also speeds up the non-empty
case, since repeatedly growing with append is slow.
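A sketch of the counting approach (the real New also flattens nested
multierr.Error values, which this sketch omits):
```go
package multierr

import "fmt"

// Error is the multi-error container (minimal stand-in for the sketch).
type Error struct{ errs []error }

func (e Error) Error() string { return fmt.Sprint(e.errs) }

// New returns an error wrapping all non-nil errors, or nil.
func New(errs ...error) error {
	n := 0
	for _, e := range errs {
		if e != nil {
			n++
		}
	}
	if n == 0 {
		return nil // the all-nil case allocates nothing
	}
	dst := make([]error, 0, n) // exactly sized; no append regrowth
	for _, e := range errs {
		if e != nil {
			dst = append(dst, e)
		}
	}
	return Error{errs: dst}
}
```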
Performance:
name old time/op new time/op delta
Empty-24 41.8ns ± 2% 6.4ns ± 1% -84.73% (p=0.000 n=10+10)
NonEmpty-24 120ns ± 3% 69ns ± 1% -42.01% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
Empty-24 64.0B ± 0% 0.0B -100.00% (p=0.000 n=10+10)
NonEmpty-24 168B ± 0% 88B ± 0% -47.62% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Empty-24 1.00 ± 0% 0.00 -100.00% (p=0.000 n=10+10)
NonEmpty-24 3.00 ± 0% 2.00 ± 0% -33.33% (p=0.000 n=10+10)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Errors in Go are no longer viewed as a linear chain, but as a tree.
See golang/go#53435.
Add a Range function that iterates through an error
in a pre-order, depth-first order.
This matches the iteration order of errors.As in Go 1.20.
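A sketch of the traversal, assuming the Go 1.20 Unwrap conventions (the
exact signature is an assumption):
```go
// Range calls fn on err and then recursively on its wrapped errors in
// pre-order, depth-first order, stopping early if fn returns false.
func Range(err error, fn func(error) bool) bool {
	if err == nil {
		return true
	}
	if !fn(err) {
		return false
	}
	switch err := err.(type) {
	case interface{ Unwrap() error }:
		return Range(err.Unwrap(), fn)
	case interface{ Unwrap() []error }:
		for _, e := range err.Unwrap() {
			if !Range(e, fn) {
				return false
			}
		}
	}
	return true
}
```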
This adds the logic (but currently commented out) for having
Error implement the multi-error version of Unwrap in Go 1.20.
It is commented out currently since it causes "go vet"
to complain about having the "wrong" signature.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I added util/winutil/LookupPseudoUser, which essentially consists of the bits
that I am in the process of adding to Go's standard library.
We check the provided SID for "S-1-5-x" where 17 <= x <= 20 (which are the
known pseudo-users) and then manually populate an os/user.User struct with
the correct information.
Fixes https://github.com/tailscale/tailscale/issues/869
Fixes https://github.com/tailscale/tailscale/issues/2894
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
We use this pattern in a number of places (in this repo and elsewhere)
and I was about to add a fourth to this repo, which was crossing the line.
Add this type instead so they're all the same.
Also, we have another Set type (SliceSet, which tracks its keys in
order) in another repo we can move to this package later.
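A sketch of the obvious map-backed type (the real method set may
differ):
```go
package set

// Set is a basic unordered set of comparable elements.
type Set[T comparable] map[T]struct{}

func (s Set[T]) Add(e T)           { s[e] = struct{}{} }
func (s Set[T]) Contains(e T) bool { _, ok := s[e]; return ok }
func (s Set[T]) Len() int          { return len(s) }
```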
Change-Id: Ibbdcdba5443fae9b6956f63990bdb9e9443cefa9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This sets the "com.apple.quarantine" flag on macOS, and the
"Zone.Identifier" alternate data stream on Windows.
Change-Id: If14f805467b0e2963067937d7f34e08ba1d1fa85
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
This is similar to the golang.org/x/tools/internal/fastwalk I'd
previously written, but it is not recursive and uses mem.RO.
The metrics package already had some Linux-specific directory reading
code in it. Move that out to a new general package that can be reused
by portlist too, which helps its scanning of all /proc files:
name old time/op new time/op delta
FindProcessNames-8 2.79ms ± 6% 2.45ms ± 7% -12.11% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
FindProcessNames-8 62.9kB ± 0% 33.5kB ± 0% -46.76% (p=0.000 n=9+10)
name old allocs/op new allocs/op delta
FindProcessNames-8 2.25k ± 0% 0.38k ± 0% -82.98% (p=0.000 n=9+10)
Change-Id: I75db393032c328f12d95c39f71c9742c375f207a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The //go:build syntax was introduced in Go 1.17:
https://go.dev/doc/go1.17#build-lines
gofmt has kept the +build and go:build lines in sync since
then, but enough time has passed. Time to remove them.
Done with:
perl -i -npe 's,^// \+build.*\n,,' $(git grep -l -F '+build')
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It's normal for HKLM\SOFTWARE\Policies\Tailscale to not exist but that
currently produces a lot of log spam.
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
This way we can do that once (out of band, in the GitHub action),
instead of increasing the time of each deploy that uses the package.
.wasm is removed from the list of automatically pre-compressed
extensions; an OSS bump and a small change on the corp side are needed
to make use of this change.
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Sync with golang.org/x/sync/singleflight at commit
8fcdb60fdcc0539c5e357b2308249e4e752147f1
Fixes #5790
Signed-off-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
I added new functions to winutil to obtain the state of a service and all
its dependencies, serialize them to JSON, and write them to a Logf.
When tstun.New returns a wrapped ERROR_DEVICE_NOT_AVAILABLE, we know that wintun
installation failed. We then log the service graph rooted at "NetSetupSvc".
We are interested in that specific service because network devices will not
install if that service is not running.
Updates https://github.com/tailscale/tailscale/issues/5531
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
* and move goroutine scrubbing code to its own package for reuse
* bump capver to 45
Change-Id: I9b4dfa5af44d2ecada6cc044cd1b5674ee427575
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We're adding two log IDs to facilitate data-plane audit logging: a node-specific
log ID, and a domain-specific log ID.
Updated util/deephash/deephash_test.go with revised expectations for tailcfg.Node.
Updates https://github.com/tailscale/corp/issues/6991
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
And put the rationale in the name too to save the callers the need for a comment.
Change-Id: I090f51b749a5a0641897ee89a8fb2e2080c8b782
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It is unclear whether the lack of checking nil-ness of slices
was an oversight or a deliberate feature.
Lacking a comment, the assumption is that this was an oversight.
Also, expand the logic to perform cycle detection for recursive slices.
We do this on a per-element basis since a slice is semantically
equivalent to a list of pointers.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I was working on my "dump iptables rules using only syscalls" branch and
had a bunch of C structure decoding to do. Rather than manually
calculating the padding or using unsafe trickery to actually cast
variable-length structures to Go types, I'd rather use a helper package
that deals with padding for me.
Padding rules were taken from the following article:
http://www.catb.org/esr/structure-packing/
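A sketch of the core padding rule from that article: a field starts at
the next offset that is a multiple of its alignment.
```go
// align rounds offset up to the next multiple of fieldAlign.
func align(offset, fieldAlign uintptr) uintptr {
	if rem := offset % fieldAlign; rem != 0 {
		offset += fieldAlign - rem
	}
	return offset
}
```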
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Add a new lookupTypeHasher function that is just a cached front-end
around the makeTypeHasher function.
We do not need to worry about the recursive type cycle issue that
made getTypeInfo more complicated since makeTypeHasher
is not directly recursive. All calls to itself happen lazily
through a sync.Once upon first use.
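A sketch of the cached front-end, assuming a sync.Map keyed by
reflect.Type; typeHasherFunc and makeTypeHasher are the package's own
names, and the lazy sync.Once indirection is elided here:
```go
var hasherCache sync.Map // map[reflect.Type]typeHasherFunc

func lookupTypeHasher(t reflect.Type) typeHasherFunc {
	if v, ok := hasherCache.Load(t); ok {
		return v.(typeHasherFunc)
	}
	fn := makeTypeHasher(t)
	v, _ := hasherCache.LoadOrStore(t, fn)
	return v.(typeHasherFunc)
}
```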
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The entry logic of Hash has extra complexity to make sure
we always have an addressable value on hand.
If not, we heap allocate the input.
For this reason we document that there are performance benefits
to always providing a pointer.
Rather than documenting this, just enforce it through generics.
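A sketch of the generics-enforced entry point; the shape is an
assumption, and the internal call is hypothetical:
```go
// Hash returns the hash of the value pointed to by v. Requiring a
// pointer guarantees the input is addressable, with no heap escape.
func Hash[T any](v *T) Sum {
	// hash is a stand-in for the package's internal entry point.
	return hash(reflect.ValueOf(v).Elem())
}
```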
Also, delete the unused HasherForType function.
It's an interesting use of generics, but not well tested.
We can resurrect it from code history if there's a need for it.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
There is a minor adjustment where we hash the length of the map
to be more on the cautious side.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rather than having two copies of []fieldInfo,
just maintain one and perform merging in the same pass.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Use of reflect.Value.SetXXX panics if the provided argument was
obtained from an unexported struct field.
Instead, pass an unsafe.Pointer around and convert to a
reflect.Value when necessary (i.e., for maps and interfaces).
Converting from unsafe.Pointer to reflect.Value guarantees that
none of the read-only bits will be populated.
When running in race mode, we attach type information to the pointer
so that we can type check every pointer operation.
This also type-checks that direct memory hashing is within
the valid range of a struct value.
We add test cases that previously caused deephash to panic,
but now pass.
Performance:
name old time/op new time/op delta
Hash 14.1µs ± 1% 14.1µs ± 1% ~ (p=0.590 n=10+9)
HashPacketFilter 2.53µs ± 2% 2.44µs ± 1% -3.79% (p=0.000 n=9+10)
TailcfgNode 1.45µs ± 1% 1.43µs ± 0% -1.36% (p=0.000 n=9+9)
HashArray 318ns ± 2% 318ns ± 2% ~ (p=0.541 n=10+10)
HashMapAcyclic 32.9µs ± 1% 31.6µs ± 1% -4.16% (p=0.000 n=10+9)
There is a slight performance gain due to the use of unsafe.Pointer
over reflect.Value methods. Also, passing an unsafe.Pointer (1 word)
on the stack is cheaper than passing a reflect.Value (3 words).
Performance gains are diminishing since SHA-256 hashing now dominates the runtime.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
When built with "deephash_debug", print the set of HashXXX methods.
Example usage:
```
$ go test -run=GetTypeHasher/string_slice -tags=deephash_debug
U64(2)+U64(3)+S("foo")+U64(3)+S("bar")+FIN
```
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rather than separate functions to hash each kind,
just rely on the fact that these are direct memory hashable,
thus simplifying the code.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Every implementation of typeHasherFunc always returns true,
which implies that the slow path is no longer executed.
Delete it.
h.hashValueWithType(v, ti, ...) is deleted as it is equivalent to:
ti.hasher()(h, v)
h.hashValue(v, ...) is deleted as it is equivalent to:
ti := getTypeInfo(v.Type())
ti.hasher()(h, v)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Add support for maps and interfaces to the fast path.
Add cycle-detection to the pointer handling logic.
This logic is mostly copied from the slow path.
A future commit will delete the slow path once
the fast path never falls back to the slow path.
Performance:
name old time/op new time/op delta
Hash-24 18.5µs ± 1% 14.9µs ± 2% -19.52% (p=0.000 n=10+10)
HashPacketFilter-24 2.54µs ± 1% 2.60µs ± 1% +2.19% (p=0.000 n=10+10)
HashMapAcyclic-24 31.6µs ± 1% 30.5µs ± 1% -3.42% (p=0.000 n=9+8)
TailcfgNode-24 1.44µs ± 2% 1.43µs ± 1% ~ (p=0.171 n=10+10)
HashArray-24 324ns ± 1% 324ns ± 2% ~ (p=0.425 n=9+9)
The additional cycle detection logic doesn't incur much slow down
since it only activates if a type is recursive, which does not apply
for any of the types that we care about.
There is a notable performance boost since we switch from the fast path
to the slow path less often. Most notably, a struct with a field that
could not be handled by the fast path would previously cause
the entire struct to go through the slow path.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
There are 5 types that we care about that implement AppendTo:
key.DiscoPublic
key.NodePublic
netip.Prefix
netipx.IPRange
netip.Addr
The key types are thin wrappers around [32]byte and are memory hashable.
The netip.Prefix and netipx.IPRange types are thin wrappers over netip.Addr
and are hashable by default if netip.Addr is hashable.
The netip.Addr type is the only one with a complex structure where
the default behavior of deephash does not hash it correctly due to the presence
of the intern.Value type.
Drop support for AppendTo and instead add specialized hashing for netip.Addr
that would be semantically equivalent to == on the netip.Addr values.
The AppendTo support was already broken prior to this change.
It was fully removed (intentionally or not) in #4870.
It was partially restored in #4858 for the fast path,
but still broken in the slow path.
Just drop support for it altogether.
This does mean we lack any ability for types to self-hash themselves.
In the future we can add support for types that implement:
interface { DeepHash() Sum }
Test and fuzz cases were added for the relevant types that
used to rely on the AppendTo method.
FuzzAddr has been executed on 1 billion samples without issues.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rename Hash as Block512 to indicate that this is a general-purpose
hash.Hash for any algorithm that operates on 512-bit block sizes.
While we rename the package as hashx in this commit,
a subsequent commit will move the sha256x package to hashx.
This is done separately to avoid confusing git.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Also, rename canMemHash to typeIsMemHashable to be consistent.
There are zero changes to the semantics.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Any type that is memory hashable must not be recursive since
there are definitely no pointers involved to make a cycle.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Put the t.Size() == 0 check first since this is applicable in all cases.
Drop the last struct field conditional since this is covered by the
sumFieldSize check at the end.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Hashing []any is slow since hashing of interfaces is slow.
Hashing of interfaces is slow since we pessimistically assume
that cycles can occur through them and start cycle tracking.
Drop the variadic signature of Update and fix callers to pass in
an anonymous struct so that we are hashing concrete types
near the root of the value tree.
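A usage sketch of the new convention: wrap the inputs in an anonymous
struct so the root of the hashed tree is a concrete type (the types,
field names, and exact Update signature here are illustrative):
```go
type Prefs struct{ /* ... */ }
type NetMap struct{ /* ... */ }

var prev deephash.Sum

func changed(p *Prefs, n *NetMap) bool {
	return deephash.Update(&prev, &struct {
		Prefs  *Prefs
		NetMap *NetMap
	}{p, n})
}
```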
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Formatting a time.Time as RFC3339 is slow.
See https://go.dev/issue/54093
Now that we have efficient hashing of fixed-width integers,
just hash the time.Time as a binary value.
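A sketch of the binary approach (which fields get hashed is an
assumption; the point is fixed-width integers instead of RFC 3339
text, and hasher is the package's own type):
```go
func hashTime(h *hasher, t time.Time) {
	h.HashUint64(uint64(t.Unix()))       // seconds since epoch
	h.HashUint32(uint32(t.Nanosecond())) // sub-second part
	_, off := t.Zone()
	h.HashUint32(uint32(off)) // zone offset, so equal times hash equally
}
```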
Performance:
Hash-24 19.0µs ± 1% 18.6µs ± 1% -2.03% (p=0.000 n=10+9)
TailcfgNode-24 1.79µs ± 1% 1.40µs ± 1% -21.74% (p=0.000 n=10+9)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Switch deephash to use sha256x.Hash.
We add sha256x.HashString to efficiently hash a string.
It uses unsafe under the hood to convert a string to a []byte.
We also modify sha256x.Hash to export the underlying hash.Hash
for testing purposes so that we can intercept all hash.Hash calls.
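A sketch of the zero-copy conversion, written here with Go 1.20's
unsafe.StringData (the original likely used an older header-based
cast); this is safe only because the resulting bytes are never written:
```go
import "unsafe"

func (h *Hash) HashString(s string) {
	b := unsafe.Slice(unsafe.StringData(s), len(s))
	h.HashBytes(b)
}
```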
Performance:
name old time/op new time/op delta
Hash-24 19.8µs ± 1% 19.2µs ± 1% -3.01% (p=0.000 n=10+10)
HashPacketFilter-24 2.61µs ± 0% 2.53µs ± 1% -3.01% (p=0.000 n=8+10)
HashMapAcyclic-24 31.3µs ± 1% 29.8µs ± 0% -4.80% (p=0.000 n=10+9)
TailcfgNode-24 1.83µs ± 1% 1.82µs ± 2% ~ (p=0.305 n=10+10)
HashArray-24 344ns ± 2% 323ns ± 1% -6.02% (p=0.000 n=9+10)
The performance gain is not as dramatic as sha256x over sha256 due to:
1. most of the hashing already occurring through the direct memory hashing logic, and
2. what does not go through direct memory hashing is slowed down by reflect.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
In Go 1.19, the reflect.Value.MapRange method uses "function outlining"
so that the allocation of reflect.MapIter is inlinable by the caller.
If the iterator doesn't escape the caller, it can be stack allocated.
See https://go.dev/cl/400675
Performance:
name old time/op new time/op delta
HashMapAcyclic-24 31.9µs ± 2% 32.1µs ± 1% ~ (p=0.075 n=10+10)
name old alloc/op new alloc/op delta
HashMapAcyclic-24 0.00B 0.00B ~ (all equal)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The hash.Hash provided by sha256.New is much more efficient
if we always provide it with data a multiple of the block size.
This avoids double-copying of data into the internal block
of sha256.digest.x. Effectively, we are managing a block ourselves
to ensure we only ever call hash.Hash.Write with full blocks.
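A simplified sketch of the block management (the real type also handles
Sum and Reset, which this omits):
```go
import "hash"

type Hash struct {
	h  hash.Hash // the wrapped hash, e.g. sha256.New()
	x  [64]byte  // one 512-bit block
	nx int       // bytes currently buffered in x
}

// HashBytes writes p, handing the inner hash only whole blocks so
// sha256 never has to re-buffer partial input.
func (b *Hash) HashBytes(p []byte) {
	if b.nx > 0 { // top up a partially filled block first
		n := copy(b.x[b.nx:], p)
		b.nx += n
		p = p[n:]
		if b.nx == len(b.x) {
			b.h.Write(b.x[:])
			b.nx = 0
		}
	}
	if n := len(p) &^ 63; n > 0 { // largest multiple of 64 bytes
		b.h.Write(p[:n])
		p = p[n:]
	}
	b.nx += copy(b.x[b.nx:], p) // buffer the remainder
}
```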
Performance:
name old time/op new time/op delta
Hash 33.5µs ± 1% 20.6µs ± 1% -38.40% (p=0.000 n=10+9)
The logic has gone through CPU-hours of fuzzing.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The logic of deephash is both simpler and easier to reason about
if values are always addressable.
In Go, the composite kinds are slices, arrays, maps, structs,
interfaces, pointers, channels, and functions,
where we define "composite" as a Go value that encapsulates
some other Go value (e.g., a map is a collection of key-value entries).
In the cases of pointers and slices, the sub-values are always addressable.
In the cases of arrays and structs, the sub-values are always addressable
if and only if the parent value is addressable.
In the case of maps and interfaces, the sub-values are never addressable.
To make them addressable, we need to copy them onto the heap.
For the purposes of deephash, we do not care about channels and functions.
For all non-composite kinds (e.g., strings and ints), they are only addressable
if obtained from one of the composite kinds that produce addressable values
(i.e., pointers, slices, addressable arrays, and addressable structs).
A non-addressable, non-composite kind can be made addressable by
allocating it on the heap, obtaining a pointer to it, and dereferencing it.
Thus, if we can ensure that values are addressable at the entry points,
and shallow copy sub-values whenever we encounter an interface or map,
then we can ensure that all values are always addressable and
assume such property throughout all the logic.
Performance:
name old time/op new time/op delta
Hash-24 21.5µs ± 1% 19.7µs ± 1% -8.29% (p=0.000 n=9+9)
HashPacketFilter-24 2.61µs ± 1% 2.62µs ± 0% +0.29% (p=0.037 n=10+9)
HashMapAcyclic-24 30.8µs ± 1% 30.9µs ± 1% ~ (p=0.400 n=9+10)
TailcfgNode-24 1.84µs ± 1% 1.84µs ± 2% ~ (p=0.928 n=10+10)
HashArray-24 324ns ± 2% 332ns ± 2% +2.45% (p=0.000 n=10+10)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The Do function assists in calling functions that must succeed.
It only interacts well with functions that return (T, err).
Signatures with more return arguments are not supported.
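A sketch matching that shape, with a usage example (the exact
definition is an assumption):
```go
// Do returns v if err is nil and panics otherwise.
func Do[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

// Example: addr := must.Do(netip.ParseAddr("1.2.3.4"))
```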
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
We have very similar code in corp, moving it to util/precompress allows
it to be reused.
Updates #5133
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Clients may have platform-specific metrics they would like uploaded
(e.g. extracted from MetricKit on iOS). Add a new local API endpoint
that allows metrics to be updated by a simple name/value JSON-encoded
struct.
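A sketch of the JSON body (field names are assumptions, not the actual
wire format):
```go
type metricUpdate struct {
	Name  string `json:"name"`
	Value int64  `json:"value"`
}
```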
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
And rewrite cloud detection to do at most one metadata discovery
request across all clouds, only making a first (or second) request as
confidence increases. Work remains for Windows, but this is a start.
And add Cloud to tailcfg.Hostinfo, which helped with testing using
"tailscale debug hostinfo".
Updates #4983 (Linux only)
Updates #4984
Change-Id: Ib03337089122ce0cb38c34f724ba4b4812bc614e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
And remove the GCP special-casing from ipn/ipnlocal; do it only in the
forwarder for *.internal.
Fixes #4980
Fixes #4981
Change-Id: I5c481e96d91f3d51d274a80fbd37c38f16dfa5cb
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This does three things:
* If you're on GCP, it adds a *.internal DNS split route to the
metadata server, so we never break GCP DNS names. This lets people
have some Tailscale nodes on GCP and some not (e.g. laptops at home)
without having to add a Tailnet-wide *.internal DNS route.
If you already have such a route, though, it won't overwrite it.
* If the 100.100.100.100 DNS forwarder has nowhere to forward to,
it forwards it to the GCP metadata IP, which forwards to 8.8.8.8.
This means there are never errNoUpstreams ("upstream nameservers not set")
errors on GCP due to e.g. mangled /etc/resolv.conf (GCP default VMs
don't have systemd-resolved, so it's likely a DNS supremacy fight)
* makes the DNS fallback mechanism use the GCP metadata IP as a
fallback before our hosted HTTP-based fallbacks
I created a default GCP VM from their web wizard. It has no
systemd-resolved.
I then made its /etc/resolv.conf be empty and deleted its GCP
hostnames in /etc/hosts.
I then logged in to a tailnet with no global DNS settings.
With this, tailscaled writes /etc/resolv.conf (direct mode, as no
systemd-resolved) and sets it to 100.100.100.100, which then has
regular DNS via the metadata IP and *.internal DNS via the metadata IP
as well. If the tailnet configures explicit DNS servers, those are used
instead, except for *.internal.
This also adds a new util/cloudenv package based on version/distro
where the cloud type is only detected once. We'll likely expand it in
the future for other clouds, doing variants of this change for other
popular cloud environments.
Fixes#4911
RELNOTES=Google Cloud DNS improvements
Change-Id: I19f3c2075983669b2b2c0f29a548da8de373c7cf
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
(breaking up parts of another change)
This adds a PacketFilter hashing benchmark with an input that both
contains every possible field and is somewhat representative of the
shape of what real packet filters contain.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Regression from 09afb8e35b, in which the
same reflect.Value scratch value was being used as the map iterator
copy destination.
Also: make nil and empty maps hash differently, add test.
Fixes #4871
Co-authored-by: Josh Bleecher Snyder <josharian@gmail.com>
Change-Id: I67f42524bc81f694c1b7259d6682200125ea4a66
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
AFAICT this isn't documented on MSDN, but based on the issue referenced below,
NRPT rules are not working when a rule specifies > 50 domains.
This patch modifies our NRPT rule generator to split the list of domains
into chunks as necessary, and write a separate rule for each chunk.
For compatibility reasons, we continue to use the hard-coded rule ID, but
as additional rules are required, we generate new GUIDs. Those GUIDs are
stored under the Tailscale registry path so that we know which rules are ours.
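A sketch of the chunking step, given the observed ~50-domain limit
(GUID bookkeeping for the extra rules is handled separately):
```go
func chunkDomains(domains []string, max int) [][]string {
	var chunks [][]string
	for len(domains) > max {
		chunks = append(chunks, domains[:max])
		domains = domains[max:]
	}
	if len(domains) > 0 {
		chunks = append(chunks, domains)
	}
	return chunks
}
```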
I made some changes to winutil to add additional helper functions in support
of both the code and its test: I added additional registry accessors, and also
moved some token accessors from paths to util/winutil.
Fixes https://github.com/tailscale/coral/issues/63
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
I wrote this code way back at the beginning of my tenure at Tailscale when we
had concerns about needing to restore deleted machine keys from backups.
We never ended up using this functionality, and the code is now getting in the
way, so we might as well remove it.
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
The prefix is a signal to tsweb to treat this as a gauge metric when
generating the Prometheus version.
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
goimports is a superset of gofmt that also groups imports.
(the goimports tool also adds/removes imports as needed, but that
part is disabled here)
Change-Id: Iacf0408dfd9497f4ed3da4fa50e165359ce38498
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This reverts commit 8d6793fd70.
Reason: breaks Android build (cgo/pthreads addition)
We can try again next cycle.
Change-Id: I5e7e1730a8bf399a8acfce546a6d22e11fb835d5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Attempt to load the xt_mark kernel module when it is not present. If the
load fails, log error information.
It may be tempting to promote this failure to an error once it has been
in use for some time, so as to avoid reaching an error with the iptables
invocation. However, there are conditions under which the two stages may
disagree, so this change adds more useful breadcrumbs.
Example new output from tailscaled running under my WSL2:
```
router: ensure module xt_mark: "/usr/sbin/modprobe xt_mark" failed: exit status 1; modprobe: FATAL: Module xt_mark not found in directory /lib/modules/5.10.43.3-microsoft-standard-WSL2
```
Background:
There are two places to look up modules: one is `/proc/modules` "old",
the other is `/sys/module/` "new".
There was query_modules(2) in linux <2.6, alas, it is gone.
In a docker container in the default configuration, you would get
/proc/modules and /sys/module/ both populated. lsmod may work fine, but
modprobe will fail with EPERM at `finit_module()` for an unprivileged
container.
In a privileged container the load may *succeed*, if some conditions are
met. This condition should be avoided, but the code landing in this
change does not attempt to avoid this scenario as it is both difficult
to detect, and has a very uncertain impact.
In an nspawn container `/proc/modules` is populated, but `/sys/module`
does not exist. Modern `lsmod` versions will fail to gather most module
information when sysfs is not populated with module information.
In WSL2 modules are likely missing, as the in-use kernel typically is
not provided by the distribution filesystem, and WSL does not mount in a
module filesystem of its own. Notably the WSL2 kernel supports iptables
marks without listing the xt_mark module in /sys/module, and
/proc/modules is empty.
On a recent kernel, we can ask the capabilities system about SYS_MODULE;
that will help to disambiguate between the non-privileged container case
and just being root. On older kernels these calls may fail.
Updates #4329
Signed-off-by: James Tucker <james@tailscale.com>
It makes the most sense to have all our utility functions reside in one place.
There was nothing in corp that could not reasonably live in OSS.
I also updated `StartProcessAsChild` to no longer depend on `futureexec`,
thus reducing the amount of code that needed migration. I tested this change
with `tswin` and it is working correctly.
I have a follow-up PR to remove the corresponding code from corp.
The migrated code was mostly written by @alexbrainman.
Sourced from corp revision 03e90cfcc4dd7b8bc9b25eb13a26ec3a24ae0ef9
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
This patch adds new functions to be used when accessing system policies,
and revises callers to use the new functions. They first attempt the new
registry path for policies, and if that fails, attempt to fall back to the
legacy path.
We keep non-policy variants of these functions because we should retain
the ability to read settings from locations that are not exposed to
sysadmins for group policy edits.
The remaining changes will be done in corp.
Updates https://github.com/tailscale/tailscale/issues/3584
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
It was broken on Windows:
```
Error: util\winutil\winutil_windows.go:15:7: regBase redeclared in this block
Error: D:\a\tailscale\tailscale\util\winutil\winutil_notwindows.go:7:17: previous declaration
Error: util\winutil\winutil_windows.go:29:6: getRegString redeclared in this block
Error: D:\a\tailscale\tailscale\util\winutil\winutil_notwindows.go:9:40: previous declaration
Error: util\winutil\winutil_windows.go:47:6: getRegInteger redeclared in this block
Error: D:\a\tailscale\tailscale\util\winutil\winutil_notwindows.go:11:48: previous declaration
Error: util\winutil\winutil_windows.go:77:6: isSIDValidPrincipal redeclared in this block
Error: D:\a\tailscale\tailscale\util\winutil\winutil_notwindows.go:13:38: previous declaration
```
Change-Id: Ib1ce4b647f5711547840c736b933a6c42bf09583
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Our current workaround made the user check too lax, thus allowing deleted
users. This patch adds a helper function to winutil that checks that the
uid's SID represents a valid Windows security principal.
Now if `lookupUserFromID` determines that the SID is invalid, we simply
propagate the error.
Updates https://github.com/tailscale/tailscale/issues/869
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
And it updates the build tag style on a couple of files.
Change-Id: I84478d822c8de3f84b56fa1176c99d2ea5083237
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
These were supposed to be part of
3b541c833e but I guess I forgot to "git
add" them. Whoops.
Updates #3307
Change-Id: I8c768a61ec7102a01799e81dc502a22399b9e9f0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
And annotate magicsock as a start.
And add localapi and debug handlers with the Prometheus-format
exporter.
Updates #3307
Change-Id: I47c5d535fe54424741df143d052760387248f8d3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
github.com/go-multierror/multierror served us well.
But we need a few features from it (such as implementing Is),
and it's not worth maintaining a fork of such a small module.
Instead, I did a clean room implementation inspired by its API.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
util/winutil/vss contains just enough COM wrapping to query the Volume Shadow Copy service for snapshots.
WalkSnapshotsForLegacyStateDir is the friendlier interface that adds awareness of our actual use case,
mapping the snapshots and locating our legacy state directory.
Updates #3011
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
This helper allows us to retrieve `DWORD` and `QWORD` values from the Tailscale key in the Windows registry.
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
The fully qualified name of the type is thisPkg.tname,
so write the args like that too.
Suggested-by: Joe Tsai <joetsai@digital-static.net>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This is a package for shared utilities used in doing codegen programs.
The inaugural API is for writing gofmt'd code to a file.
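A sketch of that API's core, assuming go/format does the formatting
(name and signature here are assumptions):
```go
import (
	"go/format"
	"os"
)

// WriteFormatted gofmt's src and writes it to path.
func WriteFormatted(path string, src []byte) error {
	fmtted, err := format.Source(src)
	if err != nil {
		return err
	}
	return os.WriteFile(path, fmtted, 0o644)
}
```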
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Unfortunately this test fails on certain architectures.
The problem comes down to inconsistencies in the Go escape analysis
where specific variables are marked as escaping on certain architectures.
The variables escaping to the heap are unfortunately in crypto/sha256,
which makes it impossible to fix this locally in deephash.
For now, fix the test by compensating for the allocations that
occur from calling sha256.digest.Sum.
See golang/go#48055
Fixes #2727
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The index of every struct field or slice element, and the number of
fields in a struct, are unnecessary.
The hashing of Go values is unambiguous because every type (except maps)
encodes in a parsable manner. So long as we know the type information,
we could theoretically decode every value (except for maps).
At a high level:
* numbers are encoded as fixed-width records according to precision.
* strings (and AppendTo output) are encoded with a fixed-width length,
followed by the contents of the buffer.
* slices are prefixed by a fixed-width length, followed by the encoding
of each value. So long as we know the type of each element, we could
theoretically decode each element.
* arrays are encoded just like slices, but elide the length
since it is determined from the Go type.
* maps are encoded first with a byte indicating whether it is a cycle.
If a cycle, it is followed by a fixed-width index for the pointer,
otherwise followed by the SHA-256 hash of its contents. The encoding of maps
is not decodeable, but a SHA-256 hash is sufficient to avoid ambiguities.
* interfaces are encoded first with a byte indicating whether it is nil.
If not nil, it is followed by a fixed-width index for the type,
and then the encoding for the underlying value. Having the type be encoded
first ensures that the value could theoretically be decoded next.
* pointers are encoded first with a byte indicating whether it is
1) nil, 2) a cycle, or 3) newly seen. If a cycle, it is followed by
a fixed-width index for the pointer. If newly seen, it is followed by
the encoding for the pointed-at value.
Removing unnecessary details speeds up hashing:
name old time/op new time/op delta
Hash-8 76.0µs ± 1% 55.8µs ± 2% -26.62% (p=0.000 n=10+10)
HashMapAcyclic-8 61.9µs ± 0% 62.0µs ± 0% ~ (p=0.666 n=9+9)
TailcfgNode-8 10.2µs ± 1% 7.5µs ± 1% -26.90% (p=0.000 n=10+9)
HashArray-8 1.07µs ± 1% 0.70µs ± 1% -34.67% (p=0.000 n=10+9)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Instead of hashing the human-readable formatted form of a number,
hash the native machine bits of the integer itself.
There is a small performance gain for this:
name old time/op new time/op delta
Hash-8 75.7µs ± 1% 76.0µs ± 2% ~ (p=0.315 n=10+9)
HashMapAcyclic-8 63.1µs ± 3% 61.3µs ± 1% -2.77% (p=0.000 n=10+10)
TailcfgNode-8 10.3µs ± 1% 10.2µs ± 1% -1.48% (p=0.000 n=10+10)
HashArray-8 1.07µs ± 1% 1.05µs ± 1% -1.79% (p=0.000 n=10+10)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The swapping of bufio.Writer between hasher and mapHasher is subtle.
Just embed a hasher in mapHasher to avoid complexity here.
No notable change in performance:
name old time/op new time/op delta
Hash-8 76.7µs ± 1% 77.0µs ± 1% ~ (p=0.182 n=9+10)
HashMapAcyclic-8 62.4µs ± 1% 62.5µs ± 1% ~ (p=0.315 n=10+9)
TailcfgNode-8 10.3µs ± 1% 10.3µs ± 1% -0.62% (p=0.004 n=10+9)
HashArray-8 1.07µs ± 1% 1.06µs ± 1% -0.98% (p=0.001 n=8+9)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The previous algorithm used a map of all visited pointers.
The strength of this approach is that it quickly prunes any nodes
that we have ever visited before. The detriment of the approach
is that pruning is heavily dependent on the order that pointers
were visited. This is especially relevant for hashing a map
where map entries are visited in a non-deterministic manner,
which would cause the map hash to be non-deterministic
(which defeats the point of a hash).
This new algorithm uses a stack of all visited pointers,
similar to how github.com/google/go-cmp performs cycle detection.
When we visit a pointer, we push it onto the stack, and when
we leave a pointer, we pop it from the stack.
Before visiting a pointer, we first check whether the pointer exists
anywhere in the stack. If yes, then we prune the node.
The detriment of this approach is that we may hash a node more often
than before since we do not prune as aggressively.
The set of visited pointers up until any node is only the
path of nodes up to that node and not any other pointers
that may have been visited elsewhere. This provides us
deterministic hashing regardless of visit order.
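A sketch of the stack bookkeeping (pointers shown as uintptr for
brevity): only pointers on the current visit path count as "seen",
which is what makes pruning independent of visit order.
```go
type visitStack []uintptr

func (s *visitStack) push(p uintptr) { *s = append(*s, p) }
func (s *visitStack) pop()           { *s = (*s)[:len(*s)-1] }

func (s visitStack) contains(p uintptr) bool {
	for _, q := range s {
		if q == p {
			return true
		}
	}
	return false
}
```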
We can now delete hashMapFallback and associated complexity,
which only exists because the previous approach was non-deterministic
in the presence of cycles.
This fixes a failure of the old algorithm where obviously different
values are treated as equal because the pruning was too aggressive.
See https://github.com/tailscale/tailscale/issues/2443#issuecomment-883653534
The new algorithm is slightly slower since it prunes less aggressively:
name old time/op new time/op delta
Hash-8 66.1µs ± 1% 68.8µs ± 1% +4.09% (p=0.000 n=19+19)
HashMapAcyclic-8 63.0µs ± 1% 62.5µs ± 1% -0.76% (p=0.000 n=18+19)
TailcfgNode-8 9.79µs ± 2% 9.88µs ± 1% +0.95% (p=0.000 n=19+17)
HashArray-8 643ns ± 1% 653ns ± 1% +1.64% (p=0.000 n=19+19)
However, a slower but more correct algorithm seems
more favorable than a faster but incorrect algorithm.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
A Go interface may hold any number of different concrete types.
Just because two underlying values hash to the same thing
does not mean the two values are identical if they have different
concrete types. As such, include the type in the hash.
Seed the hash upon first use with the current time.
This ensures that the stability of the hash is bounded within
the lifetime of one program execution.
Hopefully, this prevents future bugs where someone assumes that
this hash is stable.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The fact that Hash returns a [sha256.Size]byte leaks details about
the underlying hash implementation. This could very well be any other
hashing algorithm with a possible different block size.
Abstract this implementation detail away by declaring an opaque type
that is comparable. While we are changing the signature of UpdateHash,
rename it to just Update to reduce stutter (e.g., deephash.Update).
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
DNS names consist of labels, but outside of length limits, DNS
itself permits any content within the labels. Some records require
labels to conform to hostname limitations (which is what we implemented
before), but not all.
Fixes #2024
Signed-off-by: David Anderson <danderson@tailscale.com>
I based my estimate of the required timeout on locally observed
behavior. But CI machines are worse than my local machine.
16s was enough to reduce flakiness but not eliminate it. Bump it up again.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Consolidates the node display name logic from each of the clients into
tailcfg.Node. UI clients can use these names directly, rather than computing
them independently.
Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
* show the DNS name over the hostname, removing the domain's common
MagicDNS suffix. Only show the hostname if there's no DNS name,
but still show shared devices' MagicDNS FQDN.
* remove nerdy low-level details by default: endpoints, DERP relay,
public key. They're available in JSON mode still for those who need
them.
* only show an endpoint or DERP relay when it's active, with the goal
of making debugging easier (so it's easier for users to understand
what's happening). The asterisks are gone.
* remove Tx/Rx numbers by default for idle peers; only show them when
there's traffic.
* include peers' owner login names
* add CLI option to not show peers (matching --self=true, --peers= also
defaults to true)
* sort by DNS/host name, not public key
* reorder columns
Addresses #964
Still to be done:
- Figure out the correct logging lines in util/systemd
- Figure out if we need to slip the systemd.Status function anywhere
else
- Log util/systemd errors? (most of the errors are of the "you cannot do
anything about this, but it might be a bad idea to crash the program if
it errors" kind)
Assistance in getting this over the finish line would help a lot.
Signed-off-by: Christine Dodrill <me@christine.website>
util/systemd: rename the nonlinux file to appease the magic
Signed-off-by: Christine Dodrill <me@christine.website>
util/systemd: fix package name
Signed-off-by: Christine Dodrill <me@christine.website>
util/systemd: fix review feedback from @mdlayher
Signed-off-by: Christine Dodrill <me@christine.website>
cmd/tailscale{,d}: update depaware manifests
Signed-off-by: Christine Dodrill <me@christine.website>
util/systemd: use sync.Once instead of func init
Signed-off-by: Christine Dodrill <me@christine.website>
control/controlclient: minor review feedback fixes
Signed-off-by: Christine Dodrill <me@christine.website>
{control,ipn,systemd}: fix review feedback
Signed-off-by: Christine Dodrill <me@christine.website>
review feedback fixes
Signed-off-by: Christine Dodrill <me@christine.website>
ipn: fix sprintf call
Signed-off-by: Christine Dodrill <me@christine.website>
ipn: make staticcheck less sad
Signed-off-by: Christine Dodrill <me@christine.website>
ipn: print IP address in connected status
Signed-off-by: Christine Dodrill <me@christine.website>
ipn: review feedback
Signed-off-by: Christine Dodrill <me@christine.website>
final fixups
Signed-off-by: Christine Dodrill <me@christine.website>
The cornerstone API is a more memory-efficient Unmarshal.
The savings come from re-using a json.Decoder.
BenchmarkUnmarshal-8 4016418 288 ns/op 8 B/op 1 allocs/op
BenchmarkStdUnmarshal-8 4189261 283 ns/op 184 B/op 2 allocs/op
It also includes a Bytes type to reduce allocations
when unmarshalling a non-hex-encoded JSON string into a []byte.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This makes it easy to compact slices that contain duplicate elements
by sorting and then uniqing.
This is an alternative to constructing an intermediate map
and then extracting elements from it. It also provides
more control over equality than using a map key does.
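A sketch of the idea, using the modern slices package for brevity; the
real helpers likely take explicit less/eq functions to provide that
control over equality:
```go
import (
	"cmp"
	"slices"
)

// SortedUniq sorts s and removes adjacent duplicates in place,
// returning the compacted slice.
func SortedUniq[T cmp.Ordered](s []T) []T {
	slices.Sort(s)
	return slices.Compact(s)
}
```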
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>