- 28 Oct, 2016 40 commits
-
Austin Clements authored
gobuf.ctxt is set to nil from many places in assembly code and these assignments require write barriers with the hybrid barrier. Conveniently, in most of these places ctxt should already be nil, in which case we don't need the barrier. This commit changes these places to assert that ctxt is already nil. gogo is more complicated, since ctxt may not already be nil. For gogo, we manually perform the write barrier if ctxt is not nil. Updates #17503. Change-Id: I9d75e27c75a1b7f8b715ad112fc5d45ffa856d30 Reviewed-on: https://go-review.googlesource.com/31764 Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
Currently, we perform write barriers after performing pointer writes. At the moment, it simply doesn't matter what order this happens in, as long as they appear atomic to GC. But both the hybrid barrier and ROC are going to require a pre-write write barrier.

For the hybrid barrier, this is important because the barrier needs to observe both the current value of the slot and the value that will be written to it. (Alternatively, the caller could do the write and pass in the old value, but it seems easier and more useful to just swap the order of the barrier and the write.)

For ROC, this is necessary because, if the pointer write is going to make the pointer reachable to some goroutine that it currently is not visible to, the garbage collector must take some special action before that pointer becomes more broadly visible.

This commit swaps pointer writes around so the write barrier occurs before the pointer write. The main subtlety here is bulk memory writes. Currently, these copy to the destination first and then use the pointer bitmap of the destination to find the copied pointers and invoke the write barrier. This is necessary because the source may not have a pointer bitmap. To handle these, we pass both the source and the destination to the bulk memory barrier, which uses the pointer bitmap of the destination, but reads the pointer values from the source. Updates #17503. Change-Id: I78ecc0c5c94ee81c29019c305b3d232069294a55 Reviewed-on: https://go-review.googlesource.com/31763 Reviewed-by: Rick Hudson <rlh@golang.org>
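A minimal sketch of the ordering change described here, using hypothetical helper names (shade stands in for the collector's marking hook; this is not the runtime's actual code):

```go
package main

import (
	"fmt"
	"unsafe"
)

// shade is a stand-in for the GC hook that greys an object (hypothetical).
func shade(p unsafe.Pointer) {
	if p != nil {
		fmt.Printf("shade %p\n", p)
	}
}

// writePointer illustrates the new ordering: the barrier runs *before* the
// slot is overwritten, so it can observe both the old and the new value.
func writePointer(slot *unsafe.Pointer, newp unsafe.Pointer) {
	shade(*slot) // old value, about to be overwritten
	shade(newp)  // value being installed
	*slot = newp // the actual pointer write happens last
}

func main() {
	a, b := new(int), new(int)
	slot := unsafe.Pointer(a)
	writePointer(&slot, unsafe.Pointer(b))
}
```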
-
Russ Cox authored
We reject import of main packages, but we missed tests. Reject in all tests except test of that main package. We reject local (relative) imports from code with a non-local import path, but again we missed tests. Reject those too. Fixes #14811. Fixes #15795. Fixes #17475. Change-Id: I535ff26889520276a891904f54f1a85b2c40207d Reviewed-on: https://go-review.googlesource.com/31821 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org> Reviewed-by: Quentin Smith <quentin@golang.org>
-
Josh Bleecher Snyder authored
Change-Id: I2a710f0e9b484b3dfc581d3a9a23aa13321ec267 Reviewed-on: https://go-review.googlesource.com/32316 Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Cherry Zhang authored
Materialize float constant 0 from integer zero register, instead of loading from constant pool. Also fix assembling FMOV from zero register to FP register. Change-Id: Ie413dd342cedebdb95ba8cfc220e23ed2a39e885 Reviewed-on: https://go-review.googlesource.com/32250 Run-TryBot: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: David Chase <drchase@google.com>
-
Cherry Zhang authored
Apparently on macOS Sierra LLDB thinks /usr/lib/dyld is mapped at address 0, even if Go code starts at 0x1000, and it looks up addresses from dyld, which shadows Go symbols. Move the Go binary to a higher address to avoid the clash. Fixes #17463. Re-enable TestLldbPython. Change-Id: I89ca6f3ee48aa6da9862bfa0c2da91477cc93255 Reviewed-on: https://go-review.googlesource.com/32185 Run-TryBot: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Quentin Smith <quentin@golang.org>
-
Austin Clements authored
Several of our current write barrier elision optimizations are invalid with the hybrid barrier. Eliding the hybrid barrier requires that *both* the current and new pointer be already shaded and, since we don't have the flow analysis to figure out anything about the slot's current value, for now we have to just disable several of these optimizations.

This has a slight impact on binary size. On linux/amd64, the go tool binary increases by 0.7% and the compile binary increases by 1.5%. It also has a slight impact on performance, as one would expect. We'll win some of this back in subsequent commits.

name old time/op new time/op delta
BinaryTree17-12 2.38s ± 1% 2.40s ± 1% +0.82% (p=0.000 n=18+20)
Fannkuch11-12 2.84s ± 1% 2.70s ± 0% -4.97% (p=0.000 n=18+18)
FmtFprintfEmpty-12 44.2ns ± 1% 46.4ns ± 2% +4.89% (p=0.000 n=16+18)
FmtFprintfString-12 131ns ± 0% 134ns ± 1% +2.05% (p=0.000 n=12+19)
FmtFprintfInt-12 114ns ± 1% 117ns ± 1% +3.26% (p=0.000 n=19+20)
FmtFprintfIntInt-12 176ns ± 1% 181ns ± 1% +3.25% (p=0.000 n=20+20)
FmtFprintfPrefixedInt-12 185ns ± 1% 190ns ± 1% +2.77% (p=0.000 n=19+18)
FmtFprintfFloat-12 249ns ± 1% 254ns ± 1% +1.71% (p=0.000 n=18+20)
FmtManyArgs-12 747ns ± 1% 743ns ± 1% -0.58% (p=0.000 n=19+18)
GobDecode-12 6.57ms ± 1% 6.61ms ± 0% +0.73% (p=0.000 n=19+20)
GobEncode-12 5.58ms ± 1% 5.60ms ± 0% +0.27% (p=0.001 n=18+18)
Gzip-12 223ms ± 1% 223ms ± 1% ~ (p=0.351 n=19+20)
Gunzip-12 37.9ms ± 0% 37.9ms ± 1% ~ (p=0.095 n=16+20)
HTTPClientServer-12 77.8µs ± 1% 78.5µs ± 1% +0.97% (p=0.000 n=19+20)
JSONEncode-12 14.8ms ± 1% 14.8ms ± 1% ~ (p=0.079 n=20+19)
JSONDecode-12 53.7ms ± 1% 54.2ms ± 1% +0.92% (p=0.000 n=20+19)
Mandelbrot200-12 3.81ms ± 1% 3.81ms ± 0% ~ (p=0.916 n=19+18)
GoParse-12 3.19ms ± 1% 3.19ms ± 1% ~ (p=0.175 n=20+19)
RegexpMatchEasy0_32-12 71.9ns ± 1% 70.6ns ± 1% -1.87% (p=0.000 n=19+20)
RegexpMatchEasy0_1K-12 946ns ± 0% 944ns ± 0% -0.22% (p=0.000 n=19+16)
RegexpMatchEasy1_32-12 67.3ns ± 2% 66.8ns ± 1% -0.72% (p=0.008 n=20+20)
RegexpMatchEasy1_1K-12 374ns ± 1% 384ns ± 1% +2.69% (p=0.000 n=18+20)
RegexpMatchMedium_32-12 107ns ± 1% 107ns ± 1% ~ (p=1.000 n=20+20)
RegexpMatchMedium_1K-12 34.3µs ± 1% 34.6µs ± 1% +0.90% (p=0.000 n=20+20)
RegexpMatchHard_32-12 1.78µs ± 1% 1.80µs ± 1% +1.45% (p=0.000 n=20+19)
RegexpMatchHard_1K-12 53.6µs ± 0% 54.5µs ± 1% +1.52% (p=0.000 n=19+18)
Revcomp-12 417ms ± 5% 391ms ± 1% -6.42% (p=0.000 n=16+19)
Template-12 61.1ms ± 1% 64.2ms ± 0% +5.07% (p=0.000 n=19+20)
TimeParse-12 302ns ± 1% 305ns ± 1% +0.90% (p=0.000 n=18+18)
TimeFormat-12 319ns ± 1% 315ns ± 1% -1.25% (p=0.000 n=18+18)
[Geo mean] 54.0µs 54.3µs +0.58%

name old time/op new time/op delta
XGarbage-12 2.24ms ± 2% 2.28ms ± 1% +1.68% (p=0.000 n=18+17)
XHTTP-12 11.4µs ± 1% 11.6µs ± 2% +1.63% (p=0.000 n=18+18)
XJSON-12 11.6ms ± 0% 12.5ms ± 0% +7.84% (p=0.000 n=18+17)

Updates #17503. Change-Id: I1899f8e35662971e24bf692b517dfbe2b533c00c Reviewed-on: https://go-review.googlesource.com/31572 Reviewed-by: Keith Randall <khr@golang.org>
-
Austin Clements authored
As for dropg, save is writing a nil pointer that will generate a write barrier with the hybrid barrier. However, in this case, ctxt always should already be nil, so replace the write with an assertion that this is the case. At this point, we're ready to disable the write barrier elision optimizations that interfere with the hybrid barrier. Updates #17503. Change-Id: I83208e65aa33403d442401f355b2e013ab9a50e9 Reviewed-on: https://go-review.googlesource.com/31571 Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently this contains no write barriers because it's writing nil pointers, but with the hybrid barrier, even these will produce write barriers. However, since these are *gs and *ms, they don't need write barriers, so we can simply eliminate them. Updates #17503. Change-Id: Ib188a60492c5cfb352814bf9b2bcb2941fb7d6c0 Reviewed-on: https://go-review.googlesource.com/31570 Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
The hybrid barrier requires allocate-black, but there's one case where we don't currently allocate black: the tiny allocator. If we allocate a *new* tiny alloc block during GC, it will be allocated black, but if we allocated the current block before GC, it won't be black, and the further allocations from it won't mark it, which means we may free a reachable tiny block during sweeping. Fix this by passing over all mcaches at the beginning of mark, while the world is still stopped, and greying their tiny blocks. Updates #17503. Change-Id: I04d4df7cc2f553f8f7b1e4cb0b52e2946588111a Reviewed-on: https://go-review.googlesource.com/31456 Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
The hybrid barrier requires barriers on stack-to-stack copies if either stack is grey. There are only two instances of this in the runtime: channel sends and starting a goroutine. Channel sends already use typedmemmove and hence have the necessary barriers. This commit adds barriers for the stack-to-stack copy when starting a goroutine. Updates #17503. Change-Id: Ibb55e08127ca4d021ac54be61cb96732efa5df5b Reviewed-on: https://go-review.googlesource.com/31455 Reviewed-by: Rick Hudson <rlh@golang.org>
-
Michael Matloob authored
Change-Id: Ia511b0aadc87eb53e084d14cdb90ba4be958a43e Reviewed-on: https://go-review.googlesource.com/32259 Reviewed-by: Austin Clements <austin@google.com>
-
Michael Matloob authored
Original change by Daria Kolistratova <daria.kolistratova@intel.com>. Added functions with the suffix "proto" and supporting code from the pprof tool to translate profiles to protobuf. Done because the profile proto is more extensible than the legacy pprof format and is pprof's preferred profile format. A large part was taken from the https://github.com/google/pprof tool. Tested by hand and compared the result with profiles translated by the pprof tool; the profiles are identical. Fixes #16093 Change-Id: I2751345b09a66ee2b6aa64be76cba4cd1c326aa6 Reviewed-on: https://go-review.googlesource.com/32257 Run-TryBot: Michael Matloob <matloob@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Alan Donovan <adonovan@google.com>
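The new proto-generating functions are internal to runtime/pprof; from a user's point of view the profile is still written through the existing API and only the encoding of the output changes. A small usage sketch (the output filename is arbitrary, and which profile types are covered is per the CL):

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	// heap.pprof is an arbitrary output path for this example.
	f, err := os.Create("heap.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// WriteTo with debug=0 writes the machine-readable profile; per this
	// change, that output is translated into the protobuf-based format
	// that the pprof tool prefers.
	if err := pprof.Lookup("heap").WriteTo(f, 0); err != nil {
		log.Fatal(err)
	}
}
```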
-
Russ Cox authored
Fixes #17342. Change-Id: I76af756d7aff464554c5564d444962a468d0eccc Reviewed-on: https://go-review.googlesource.com/32172 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Quentin Smith <quentin@golang.org>
-
Russ Cox authored
Fixes #16164. Change-Id: Ic8f51ebd8235640143913a07b70f5b41ee061fe4 Reviewed-on: https://go-review.googlesource.com/32114 Reviewed-by: Quentin Smith <quentin@golang.org>
-
Russ Cox authored
Waiting 2ms for all the kicked-off goroutines to run and block seems a little optimistic. No harm done by waiting for 200ms instead. Fixes #17238. Change-Id: I827532ea2f5f1f3ed04179f8957dd2c563946ed0 Reviewed-on: https://go-review.googlesource.com/32103 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Russ Cox authored
Fixes #6639. Change-Id: Iefce87c5521504fd41843df8462cfd840c24410f Reviewed-on: https://go-review.googlesource.com/32102 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Rob Pike <r@golang.org>
-
Quentin Smith authored
Fixes #17536 Change-Id: Ica8c3d696848822ac65b7931455b1fd94809bfe8 Reviewed-on: https://go-review.googlesource.com/31710 Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
Currently we initialize LR on a new stack by writing nil to it. But this is an initializing write since the newly allocated stack is not zeroed, so this is unsafe with the hybrid barrier. Change this to a uintptr write to avoid a bad write barrier. Updates #17503. Change-Id: I062ac352e35df7da4644c1f2a5aaab87049d1f60 Reviewed-on: https://go-review.googlesource.com/32093 Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
We reuse finalizers in finblocks, which are allocated off-heap. This means they have to be zero-initialized before becoming visible to the garbage collector. We actually already do this by clearing the finalizer before returning it to the pool, but we're not careful to enforce correct memory ordering. Fix this by manipulating the finalizer count atomically so these writes synchronize properly with the garbage collector. Updates #17503. Change-Id: I7797d31df3c656c9fe654bc6da287f66a9e2037d Reviewed-on: https://go-review.googlesource.com/31454 Reviewed-by: Rick Hudson <rlh@golang.org>
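A minimal sketch of the publication pattern being enforced, with assumed names (not the runtime's actual finblock code): fully initialize the entry first, then publish it by bumping the count atomically, so a concurrent reader that loads the count never sees an uninitialized slot.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// block is a stand-in for an off-heap finblock: a fixed array of entries
// plus a count of how many of them are valid.
type block struct {
	cnt     uint32
	entries [64]*int
}

// push writes the entry first and only then publishes it with an atomic
// store of the count, so a reader that loads cnt atomically never observes
// a slot that has not been initialized yet.
func push(b *block, p *int) {
	n := atomic.LoadUint32(&b.cnt)
	b.entries[n] = p                // initialize the slot
	atomic.StoreUint32(&b.cnt, n+1) // then make it visible
}

func main() {
	var b block
	x := 42
	push(&b, &x)
	fmt.Println(*b.entries[0])
}
```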
-
Austin Clements authored
runfinq allocates a stack frame on the heap for constructing the finalizer function calls and reuses it for each call. However, because the type of this frame is constantly shifting, it tells mallocgc there are no pointers in it and it acts essentially like uninitialized memory between uses. But runfinq uses pointer writes with write barriers to "initialize" this memory, which is not going to be safe with the hybrid barrier, since the hybrid barrier may see a stale pointer left in the "uninitialized" frame. Fix this by zero-initializing the argument values in the frame before writing the argument pointers. Updates #17503. Change-Id: I951c0a2be427eb9082a32d65c4410e6fdef041be Reviewed-on: https://go-review.googlesource.com/31453 Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Updates #17503. Change-Id: I109d8742358ae983fdff3f3dbb7136973e81f4c3 Reviewed-on: https://go-review.googlesource.com/31452 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, zeroing generates an ssa.OpZero, which never has write barriers, even if the assignment is an OASWB. The hybrid barrier requires write barriers on zeroing, so change OASWB to generate an ssa.OpZeroWB when assigning the zero value, which turns into a typedmemclr. Updates #17503. Change-Id: Ib37ac5e39f578447dbd6b36a6a54117d5624784d Reviewed-on: https://go-review.googlesource.com/31451 Reviewed-by: Cherry Zhang <cherryyz@google.com>
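For context, this is the kind of source-level assignment affected: storing a zero value through a pointer to a type that contains pointers. A small illustrative example (the type is assumed, not taken from the CL):

```go
package main

type T struct {
	p   *int
	buf [64]byte
}

// reset stores T's zero value through a pointer. Because T contains a
// pointer field, this zeroing assignment now needs a write barrier and is
// lowered to a typed clear rather than a plain OpZero.
func reset(t *T) {
	*t = T{}
}

func main() {
	t := &T{p: new(int)}
	reset(t)
	_ = t
}
```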
-
Austin Clements authored
If a slice's backing store has pointers, we need to lower clears of that slice to memclrHasPointers instead of memclrNoHeapPointers. Updates #17503. Change-Id: I20750e4bf57f7b8862f3d898bfb32d964b91d07b Reviewed-on: https://go-review.googlesource.com/31450 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
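Illustratively, this affects the clear-loop idiom the compiler recognizes when the element type holds pointers (an assumed example, not code from the CL):

```go
package main

// clearPtrs uses the idiomatic clear loop that the compiler recognizes and
// lowers to a bulk memory clear. Since the elements are pointers, that
// lowering must target memclrHasPointers rather than memclrNoHeapPointers.
func clearPtrs(s []*int) {
	for i := range s {
		s[i] = nil
	}
}

func main() {
	s := []*int{new(int), new(int), new(int)}
	clearPtrs(s)
	_ = s
}
```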
-
Russ Cox authored
The basic structure of Part.Read should be simple: do what you can with the current buffered data, reading more as you need it. Make it that way. Working entirely in the bufio.Reader's buffer eliminates the need for an additional bytes.Buffer. This structure should be easier to extend in the future as more special cases arise. Change-Id: I83cb24a755a1767c4c037f9ece6716460c3ecd01 Reviewed-on: https://go-review.googlesource.com/32092 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
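For reference, Part.Read is what backs reading a part's body through the public API; a small, self-contained usage example (boundary and content made up for illustration):

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"mime/multipart"
	"strings"
)

func main() {
	body := "--BOUNDARY\r\n" +
		"Content-Disposition: form-data; name=\"field\"\r\n" +
		"\r\n" +
		"hello world\r\n" +
		"--BOUNDARY--\r\n"

	mr := multipart.NewReader(strings.NewReader(body), "BOUNDARY")
	for {
		p, err := mr.NextPart()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Reading the part's body goes through Part.Read, the method this
		// change restructures around the bufio.Reader's buffer.
		data, err := ioutil.ReadAll(p)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %q\n", p.FormName(), data)
	}
}
```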
-
Austin Clements authored
This reverts commit 7d14401b. Reason for revert: Doesn't build. Change-Id: I766179ab9225109d9232f783326e4d3843254980 Reviewed-on: https://go-review.googlesource.com/32256 Reviewed-by: Russ Cox <rsc@golang.org>
-
Russ Cox authored
Let users control whether unix listener socket file is unlinked on close. Fixes #13877. Change-Id: I9d1cb47e31418d655f164d15c67e188656a67d1c Reviewed-on: https://go-review.googlesource.com/32099 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
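The knob added here is (*net.UnixListener).SetUnlinkOnClose; a short usage sketch (the socket path is arbitrary):

```go
package main

import (
	"log"
	"net"
)

func main() {
	// /tmp/example.sock is an arbitrary path for this example.
	l, err := net.Listen("unix", "/tmp/example.sock")
	if err != nil {
		log.Fatal(err)
	}
	ul := l.(*net.UnixListener)

	// By default a listener created by Listen removes its socket file when
	// closed; SetUnlinkOnClose(false) asks it to leave the file in place.
	ul.SetUnlinkOnClose(false)
	defer ul.Close()

	// ... accept connections from ul ...
}
```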
-
Russ Cox authored
Fixes #17131. Change-Id: I60b381687746fadce12ef18a190cbe3f435172f2 Reviewed-on: https://go-review.googlesource.com/32098 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Quentin Smith <quentin@golang.org>
-
Austin Clements authored
Since barrier-less memclr is only safe in very narrow circumstances, this commit renames memclr to avoid accidentally calling memclr on typed memory. This can cause subtle, non-deterministic bugs, so it's worth some effort to prevent. In the near term, this will also prevent bugs creeping in from any concurrent CLs that add calls to memclr; if this happens, whichever patch hits master second will fail to compile. This also adds the other new memclr variants to the compiler's builtin.go to minimize the churn on that binary blob. We'll use these in future commits. Updates #17503. Change-Id: I00eead049f5bd35ca107ea525966831f3d1ed9ca Reviewed-on: https://go-review.googlesource.com/31369 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently fixalloc does not zero memory it reuses. This is dangerous with the hybrid barrier if the type may contain heap pointers, since it may cause us to observe a dead heap pointer on reuse. It's also error-prone since it's the only allocator that doesn't zero on allocation (mallocgc of course zeroes, but so do persistentalloc and sysAlloc). It's also largely pointless: for mcache, the caller immediately memclrs the allocation; and the two specials types are tiny so there's no real cost to zeroing them.

Change fixalloc to zero allocations by default. The only type we don't zero by default is mspan. This actually requires that the span's sweepgen survive across freeing and reallocating a span. If we were to zero it, the following race would be possible:

1. The current sweepgen is 2. Span s is on the unswept list.
2. Direct sweeping sweeps span s, finds it's all free, and releases s to the fixalloc.
3. Thread 1 allocates s from fixalloc. Suppose this zeros s, including s.sweepgen.
4. Thread 1 calls s.init, which sets s.state to _MSpanDead.
5. On thread 2, background sweeping comes across span s in allspans and cas's s.sweepgen from 0 (sg-2) to 1 (sg-1). Now it thinks it owns it for sweeping.
6. Thread 1 continues initializing s. Everything breaks.

I would like to fix this because it's obviously confusing, but it's a subtle enough problem that I'm leaving it alone for now. The solution may be to skip sweepgen 0, but then we have to think about wrap-around much more carefully. Updates #17503. Change-Id: Ie08691feed3abbb06a31381b94beb0a2e36a0613 Reviewed-on: https://go-review.googlesource.com/31368 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently the zero value of an mspan is in the "in use" state. This seems like a bad idea in general. But it's going to wreak havoc when we make fixalloc zero allocations: even "freed" mspan objects are still on the allspans list and still get looked at by the garbage collector. Hence, if we leave the mspan states the way they are, allocating a span that reuses old memory will temporarily pass that span (which is visible to GC!) through the "in use" state, which can cause "unswept span" panics. Fix all of this by making the zero state "dead". Updates #17503. Change-Id: I77c7ac06e297af4b9e6258bc091c37abe102acc3 Reviewed-on: https://go-review.googlesource.com/31367 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
The hybrid barrier requires distinguishing typed and untyped memory even when zeroing because the *current* contents of the memory matters even when overwriting. This commit introduces runtime.typedmemclr and runtime.memclrHasPointers as typed memory clearing functions parallel to runtime.typedmemmove. Currently these simply call memclr, but with the hybrid barrier we'll need to shade any pointers we're overwriting. These will provide us with the necessary hooks to do so. Updates #17503. Change-Id: I74478619f8907825898092aaa204d6e4690f27e6 Reviewed-on: https://go-review.googlesource.com/31366 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
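A rough sketch of the split being introduced, with stand-in types and names (the real functions live in the runtime and take a type descriptor):

```go
package main

import (
	"fmt"
	"unsafe"
)

// rtype is a stand-in for the runtime's type descriptor; only the fields
// this sketch needs are included.
type rtype struct {
	size        uintptr
	hasPointers bool
}

// memclrNoHeapPointers stands in for the raw, barrier-free clear that is
// only safe on memory known to contain no heap pointers.
func memclrNoHeapPointers(ptr unsafe.Pointer, n uintptr) {
	b := unsafe.Slice((*byte)(ptr), n)
	for i := range b {
		b[i] = 0
	}
}

// typedmemclr sketches the typed variant: when the type has pointers, there
// is a hook where the hybrid barrier can shade the values being overwritten
// before the raw clear runs.
func typedmemclr(typ *rtype, ptr unsafe.Pointer) {
	if typ.hasPointers {
		fmt.Println("shade old pointer values here") // barrier hook
	}
	memclrNoHeapPointers(ptr, typ.size)
}

func main() {
	var v struct{ p *int }
	typedmemclr(&rtype{size: unsafe.Sizeof(v), hasPointers: true}, unsafe.Pointer(&v))
	fmt.Println(v.p == nil)
}
```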
-
Austin Clements authored
Currently all mcaches are flushed in a single STW root job. This takes about 5 µs per P, but since it's done sequentially it adds about 5*GOMAXPROCS µs to the STW. Fix this by parallelizing the job. Since there are exactly GOMAXPROCS mcaches to flush, this parallelizes quite nicely and brings the STW latency cost down to a constant 5 µs (assuming GOMAXPROCS actually reflects the number of CPUs). Updates #17503. Change-Id: Ibefeb1c2229975d5137c6e67fac3b6c92103742d Reviewed-on: https://go-review.googlesource.com/32033 Reviewed-by: Rick Hudson <rlh@golang.org>
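A minimal sketch of the parallelization idea, with assumed names (not the runtime's scheduler code): each P's flush becomes an independent job that workers claim with an atomic counter, instead of one worker flushing all of them in sequence.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	const nProcs = 8       // one flush job per P
	var next int32 = -1    // shared job counter
	var wg sync.WaitGroup

	for w := 0; w < 4; w++ { // several workers drain the job queue
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				i := atomic.AddInt32(&next, 1)
				if i >= nProcs {
					return
				}
				// Stand-in for flushing P i's mcache.
				fmt.Printf("flush mcache %d\n", i)
			}
		}()
	}
	wg.Wait()
}
```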
-
Josh Bleecher Snyder authored
ONONAME nodes generated from unresolved symbols don't need Params. They only need Names to store Iota; move Iota to Node.Xoffset. While we're here, change iota to int64 to reduce casting. Passes toolstash -cmp.

name old alloc/op new alloc/op delta
Template 39.9MB ± 0% 39.7MB ± 0% -0.39% (p=0.000 n=19+20)
Unicode 30.9MB ± 0% 30.7MB ± 0% -0.35% (p=0.000 n=20+20)
GoTypes 119MB ± 0% 118MB ± 0% -0.42% (p=0.000 n=20+20)
Compiler 464MB ± 0% 461MB ± 0% -0.54% (p=0.000 n=19+20)

name old allocs/op new allocs/op delta
Template 386k ± 0% 383k ± 0% -0.62% (p=0.000 n=20+20)
Unicode 323k ± 0% 321k ± 0% -0.49% (p=0.000 n=20+20)
GoTypes 1.16M ± 0% 1.15M ± 0% -0.67% (p=0.000 n=20+20)
Compiler 4.09M ± 0% 4.05M ± 0% -0.95% (p=0.000 n=20+20)

Change-Id: Ib27219a0d0405def1b4dadacf64935ba12d10a94 Reviewed-on: https://go-review.googlesource.com/32237 Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
unknown authored
Added functions with the suffix "proto" and supporting code from the pprof tool to translate profiles to protobuf. Done because the profile proto is more extensible than the legacy pprof format and is pprof's preferred profile format. A large part was taken from the https://github.com/google/pprof tool. Tested by hand and compared the result with profiles translated by the pprof tool; the profiles are identical. Fixes #16093 Change-Id: I5acdb2809cab0d16ed4694fdaa7b8ddfd68df11e Reviewed-on: https://go-review.googlesource.com/30556 Run-TryBot: Michael Matloob <matloob@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Michael Matloob <matloob@golang.org>
-
Austin Clements authored
The current logic in gcDrain conflates non-blocking with preemptible draining for root jobs. As a result, if you do a non-blocking (but *not* preemptible) drain, like dedicated workers do, the root job drain will stop if preempted and fall through to heap marking jobs, which won't stop until it fails to get a heap marking job. This commit fixes the condition on root marking jobs so they only stop when preempted if the drain is preemptible. Coincidentally, this also fixes a nil pointer dereference if we call gcDrain with gcDrainNoBlock and without a user G, since it tries to get the preempt flag from the nil user G. This combination never happens right now, but will in the future. Change-Id: Ia910ec20a9b46237f7926969144a33b1b4a7b2f9 Reviewed-on: https://go-review.googlesource.com/32291 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
This adds support to the runtime/trace test for saving traces collected by its tests to disk and a script in internal/trace that uses this to collect canned traces for the trace test suite. This can be used to add to the test suite when we introduce a new trace format version. Change-Id: Id9ac1ff312235bf02f982fdfff8a827f54035758 Reviewed-on: https://go-review.googlesource.com/32290 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Joe Tsai authored
In a large codebase within Google, there are thousands of uses of: ContainsAny|IndexAny|LastIndexAny|Trim|TrimLeft|TrimRight. An analysis of their usage shows that over 97% of them only use character sets consisting of only ASCII symbols.

Uses of ContainsAny|IndexAny|LastIndexAny:
6% are 1 character (e.g., "\n" or " ")
58% are 2-4 characters (e.g., "<>" or "\r\n\t ")
24% are 5-9 characters (e.g., "()[]*^$")
10% are 10+ characters (e.g., "+-=&|><!(){}[]^\"~*?:\\/ ")

We optimize for ASCII sets, which are commonly used to search for "control" characters in some string. We don't optimize for the single character scenario since IndexRune or IndexByte could be used.

Uses of Trim|TrimLeft|TrimRight:
71% are 1 character (e.g., "\n" or " ")
14% are 2 characters (e.g., "\r\n")
10% are 3-4 characters (e.g., " \t\r\n")
5% are 10+ characters (e.g., "0123456789abcdefABCDEF")

We optimize for the single character case with a simple closured function that only checks for that character's value. We optimize for the medium and larger sets using a 16-byte bit-map representing a set of ASCII characters.

The benchmarks below have the following suffix name "%d:%d" where the first number is the length of the input and the second number is the length of the charset.

== bytes package ==
benchmark old ns/op new ns/op delta
BenchmarkIndexAnyASCII/1:1-4 5.09 5.23 +2.75%
BenchmarkIndexAnyASCII/1:2-4 5.81 5.85 +0.69%
BenchmarkIndexAnyASCII/1:4-4 7.22 7.50 +3.88%
BenchmarkIndexAnyASCII/1:8-4 11.0 11.1 +0.91%
BenchmarkIndexAnyASCII/1:16-4 17.5 17.8 +1.71%
BenchmarkIndexAnyASCII/16:1-4 36.0 34.0 -5.56%
BenchmarkIndexAnyASCII/16:2-4 46.6 36.5 -21.67%
BenchmarkIndexAnyASCII/16:4-4 78.0 40.4 -48.21%
BenchmarkIndexAnyASCII/16:8-4 136 47.4 -65.15%
BenchmarkIndexAnyASCII/16:16-4 254 61.5 -75.79%
BenchmarkIndexAnyASCII/256:1-4 542 388 -28.41%
BenchmarkIndexAnyASCII/256:2-4 705 382 -45.82%
BenchmarkIndexAnyASCII/256:4-4 1089 386 -64.55%
BenchmarkIndexAnyASCII/256:8-4 1994 394 -80.24%
BenchmarkIndexAnyASCII/256:16-4 3843 411 -89.31%
BenchmarkIndexAnyASCII/4096:1-4 8522 5873 -31.08%
BenchmarkIndexAnyASCII/4096:2-4 11253 5861 -47.92%
BenchmarkIndexAnyASCII/4096:4-4 17824 5883 -66.99%
BenchmarkIndexAnyASCII/4096:8-4 32053 5871 -81.68%
BenchmarkIndexAnyASCII/4096:16-4 60512 5888 -90.27%
BenchmarkTrimASCII/1:1-4 79.5 70.8 -10.94%
BenchmarkTrimASCII/1:2-4 79.0 105 +32.91%
BenchmarkTrimASCII/1:4-4 79.6 109 +36.93%
BenchmarkTrimASCII/1:8-4 78.8 118 +49.75%
BenchmarkTrimASCII/1:16-4 80.2 132 +64.59%
BenchmarkTrimASCII/16:1-4 243 116 -52.26%
BenchmarkTrimASCII/16:2-4 243 171 -29.63%
BenchmarkTrimASCII/16:4-4 243 176 -27.57%
BenchmarkTrimASCII/16:8-4 241 184 -23.65%
BenchmarkTrimASCII/16:16-4 238 199 -16.39%
BenchmarkTrimASCII/256:1-4 2580 840 -67.44%
BenchmarkTrimASCII/256:2-4 2603 1175 -54.86%
BenchmarkTrimASCII/256:4-4 2572 1188 -53.81%
BenchmarkTrimASCII/256:8-4 2550 1191 -53.29%
BenchmarkTrimASCII/256:16-4 2585 1208 -53.27%
BenchmarkTrimASCII/4096:1-4 39773 12181 -69.37%
BenchmarkTrimASCII/4096:2-4 39946 17231 -56.86%
BenchmarkTrimASCII/4096:4-4 39641 17179 -56.66%
BenchmarkTrimASCII/4096:8-4 39835 17175 -56.88%
BenchmarkTrimASCII/4096:16-4 40229 17215 -57.21%

== strings package ==
benchmark old ns/op new ns/op delta
BenchmarkIndexAnyASCII/1:1-4 5.94 4.97 -16.33%
BenchmarkIndexAnyASCII/1:2-4 5.94 5.55 -6.57%
BenchmarkIndexAnyASCII/1:4-4 7.45 7.21 -3.22%
BenchmarkIndexAnyASCII/1:8-4 10.8 10.6 -1.85%
BenchmarkIndexAnyASCII/1:16-4 17.4 17.2 -1.15%
BenchmarkIndexAnyASCII/16:1-4 36.4 32.2 -11.54%
BenchmarkIndexAnyASCII/16:2-4 49.6 34.6 -30.24%
BenchmarkIndexAnyASCII/16:4-4 77.5 37.9 -51.10%
BenchmarkIndexAnyASCII/16:8-4 138 45.5 -67.03%
BenchmarkIndexAnyASCII/16:16-4 241 59.1 -75.48%
BenchmarkIndexAnyASCII/256:1-4 509 378 -25.74%
BenchmarkIndexAnyASCII/256:2-4 720 381 -47.08%
BenchmarkIndexAnyASCII/256:4-4 1142 384 -66.37%
BenchmarkIndexAnyASCII/256:8-4 1999 391 -80.44%
BenchmarkIndexAnyASCII/256:16-4 3735 403 -89.21%
BenchmarkIndexAnyASCII/4096:1-4 7973 5824 -26.95%
BenchmarkIndexAnyASCII/4096:2-4 11432 5809 -49.19%
BenchmarkIndexAnyASCII/4096:4-4 18327 5819 -68.25%
BenchmarkIndexAnyASCII/4096:8-4 33059 5828 -82.37%
BenchmarkIndexAnyASCII/4096:16-4 59703 5817 -90.26%
BenchmarkTrimASCII/1:1-4 71.9 71.8 -0.14%
BenchmarkTrimASCII/1:2-4 73.3 103 +40.52%
BenchmarkTrimASCII/1:4-4 71.8 106 +47.63%
BenchmarkTrimASCII/1:8-4 71.2 113 +58.71%
BenchmarkTrimASCII/1:16-4 71.6 128 +78.77%
BenchmarkTrimASCII/16:1-4 152 116 -23.68%
BenchmarkTrimASCII/16:2-4 160 168 +5.00%
BenchmarkTrimASCII/16:4-4 172 170 -1.16%
BenchmarkTrimASCII/16:8-4 200 177 -11.50%
BenchmarkTrimASCII/16:16-4 254 193 -24.02%
BenchmarkTrimASCII/256:1-4 1438 864 -39.92%
BenchmarkTrimASCII/256:2-4 1551 1195 -22.95%
BenchmarkTrimASCII/256:4-4 1770 1200 -32.20%
BenchmarkTrimASCII/256:8-4 2195 1216 -44.60%
BenchmarkTrimASCII/256:16-4 3054 1224 -59.92%
BenchmarkTrimASCII/4096:1-4 21726 12557 -42.20%
BenchmarkTrimASCII/4096:2-4 23586 17508 -25.77%
BenchmarkTrimASCII/4096:4-4 26898 17510 -34.90%
BenchmarkTrimASCII/4096:8-4 33714 17595 -47.81%
BenchmarkTrimASCII/4096:16-4 47429 17700 -62.68%

The benchmarks added test the worst case. For IndexAny, that is when the charset matches none of the input. For Trim, it is when the charset matches all of the input. Change-Id: I970874d101a96b33528fc99b165379abe58cf6ea Reviewed-on: https://go-review.googlesource.com/31593 Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Martin Möhrmann <martisch@uos.de>
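A sketch of the ASCII bit-set technique described above (the shape mirrors what lives in the strings and bytes packages, but treat the exact names as illustrative):

```go
package main

import "fmt"

// asciiSet is a 128-bit bitmap of ASCII characters: bit c is set if byte c
// (c < 128) is in the set. The array is sized 8×uint32 so contains can index
// any byte value without going out of bounds; only the low 128 bits are set.
type asciiSet [8]uint32

// makeASCIISet builds the set, reporting ok=false if chars contains any
// non-ASCII byte (in which case the caller falls back to the generic path).
func makeASCIISet(chars string) (as asciiSet, ok bool) {
	for i := 0; i < len(chars); i++ {
		c := chars[i]
		if c >= 0x80 {
			return as, false
		}
		as[c>>5] |= 1 << uint(c&31)
	}
	return as, true
}

// contains reports whether c is in the set using one shift, one index and
// one mask, instead of rescanning the charset string for every input byte.
func (as *asciiSet) contains(c byte) bool {
	return (as[c>>5] & (1 << uint(c&31))) != 0
}

func main() {
	as, ok := makeASCIISet("\r\n\t ")
	fmt.Println(ok, as.contains(' '), as.contains('x')) // true true false
}
```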
-
Russ Cox authored
The report in #17414 points out that if you have many many templates, then this is an overwhelming list and just hurts the signal-to-noise ratio of the error. Even the test of the old behavior also supports the idea that this is noise:

template: empty: "empty" is an incomplete or empty template; defined templates are: "secondary"

The chance that someone mistyped "secondary" as "empty" is slim at best. Similarly, the compiler does not augment an error like 'unknown variable x' by dumping the full list of all the known variables. For all these reasons, drop the list. Fixes #17414. Change-Id: I78f92d2c591df7218385fe723a4abc497913acf8 Reviewed-on: https://go-review.googlesource.com/32116 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rob Pike <r@golang.org>
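For illustration, the error in question comes from executing a template name that was never given a body; roughly (assuming text/template):

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Parsing "empty" only defines an associated template named "secondary";
	// the template "empty" itself has no body to execute.
	t := template.Must(template.New("empty").Parse(`{{define "secondary"}}hello{{end}}`))

	err := t.Execute(os.Stdout, nil)
	// Before this change the error also listed every defined template; now it
	// reads along the lines of:
	//   template: empty: "empty" is an incomplete or empty template
	if err != nil {
		os.Stderr.WriteString(err.Error() + "\n")
	}
}
```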
-
Russ Cox authored
Fixes #16731. Change-Id: I6d393357973d008ab7cf5fb264acb7d38c9354eb Reviewed-on: https://go-review.googlesource.com/32104 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-