- 19 Mar, 2015 8 commits
-
-
Josh Bleecher Snyder authored
Some type assertions of the form _, ok := i.(T) allow efficient inlining. Such type assertions commonly show up in type switches. For example, with this optimization, using 6g, the length of encoding/binary's intDataSize function shrinks from 2224 to 1728 bytes (-22%).

benchmark                  old ns/op  new ns/op  delta
BenchmarkAssertI2E2Blank   4.67       0.82       -82.44%
BenchmarkAssertE2T2Blank   4.38       0.83       -81.05%
BenchmarkAssertE2E2Blank   3.88       0.83       -78.61%
BenchmarkAssertE2E2        14.2       14.4       +1.41%
BenchmarkAssertE2T2        10.3       10.4       +0.97%
BenchmarkAssertI2E2        13.4       13.3       -0.75%

Change-Id: Ie9798c3e85432bb8e0f2c723afc376e233639df7
Reviewed-on: https://go-review.googlesource.com/7697
Reviewed-by: Keith Randall <khr@golang.org>
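As a minimal sketch (not the compiler's own code), the ok-only assertion form this commit optimizes looks like the following; `isString` is an illustrative name, not something from the CL:

```go
package main

import "fmt"

// An ok-only type assertion discards the converted value, so the
// compiler can lower it to a cheap type-descriptor comparison
// instead of a full runtime call.
func isString(i interface{}) bool {
	_, ok := i.(string)
	return ok
}

func main() {
	fmt.Println(isString("hello")) // true
	fmt.Println(isString(42))      // false
}
```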
-
Josh Bleecher Snyder authored
This is preliminary cleanup for another change. No functional changes. Passes toolstash -cmp.

Change-Id: I11d562fbd6cba5c48d9636f3149e210e5f5308ad
Reviewed-on: https://go-review.googlesource.com/7696
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Austin Clements authored
The distinction between gcWorkProducer and gcWork (producer and consumer) is not serving us as originally intended, so merge these into just gcWork. The original intent was to replace the currentwbuf cache with a gcWorkProducer. However, with gchelpwork (aka mutator assists), mutators can both produce and consume work, so it will make more sense to cache a whole gcWork.

Change-Id: I6e633e96db7cb23a64fbadbfc4607e3ad32bcfb3
Reviewed-on: https://go-review.googlesource.com/7733
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently markroot fetches the wbuf to fill from the per-M wbuf cache. The wbuf cache is primarily meant for the write barrier, because it produces very little work on each call. There's little point to using the cache in markroot, since each call to markroot is likely to produce a large amount of work (so the slight win of getting it from the cache instead of from the central wbuf lists doesn't matter), and markroot does not dispose the wbuf back to the cache (so most markroot calls won't get anything from the wbuf cache anyway).

Instead, just get the wbuf from the central wbuf lists like other work producers. This will simplify later changes.

Change-Id: I07a18a4335a41e266a6d70aa3a0911a40babce23
Reviewed-on: https://go-review.googlesource.com/7732
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, the GC's concurrent mark phase runs on the system stack. There's no need to do this, and running it this way ties up the entire M and P running the GC by preventing the scheduler from preempting the GC even during concurrent mark.

Fix this by running concurrent mark on the regular G stack. It's still non-preemptible because we also set preemptoff around the whole GC process, but this moves us closer to making it preemptible.

Change-Id: Ia9f1245e299b8c5c513a4b1e3ef13eaa35ac5e73
Reviewed-on: https://go-review.googlesource.com/7730
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
"Sync" is not very informative. What's being synchronized and with whom? Update this comment to explain what we're really doing: enabling write barriers.

Change-Id: I4f0cbb8771988c7ba4606d566b77c26c64165f0f
Reviewed-on: https://go-review.googlesource.com/7700
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently we harvestwbufs the moment we enter the mark phase, even before starting the world again. Since cached wbufs are only filled when we're in mark or mark termination, they should all be empty at this point, making the harvest pointless. Remove the harvest.

We should, but do not currently, harvest at the end of the mark phase when we're running out of work to do.

Change-Id: I5f4ba874f14dd915b8dfbc4ee5bb526eecc2c0b4
Reviewed-on: https://go-review.googlesource.com/7669
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Change-Id: I0ad1a81a235c7c067fea2093bbeac4e06a233c10
Reviewed-on: https://go-review.googlesource.com/7661
Reviewed-by: Rick Hudson <rlh@golang.org>
-
- 18 Mar, 2015 10 commits
-
-
Josh Bleecher Snyder authored
Change-Id: I5a49f56518adf7d64ba8610b51ea1621ad888fc4
Reviewed-on: https://go-review.googlesource.com/7771
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
Switch statements do a binary search on long runs of constants. Doing a less-than comparison on a string is much more expensive than on (say) an int. Use a two-part comparison for strings: first compare length, then the strings themselves.

Benchmarks from issue 10000:

benchmark               old ns/op  new ns/op  delta
BenchmarkIf0            3.36       3.35       -0.30%
BenchmarkIf1            4.45       4.47       +0.45%
BenchmarkIf2            5.22       5.26       +0.77%
BenchmarkIf3            5.56       5.58       +0.36%
BenchmarkIf4            10.5       10.6       +0.95%
BenchmarkIfNewStr0      5.26       5.30       +0.76%
BenchmarkIfNewStr1      7.19       7.15       -0.56%
BenchmarkIfNewStr2      7.23       7.16       -0.97%
BenchmarkIfNewStr3      7.47       7.43       -0.54%
BenchmarkIfNewStr4      12.4       12.2       -1.61%
BenchmarkSwitch0        9.56       4.24       -55.65%
BenchmarkSwitch1        8.64       5.58       -35.42%
BenchmarkSwitch2        9.38       10.1       +7.68%
BenchmarkSwitch3        8.66       5.00       -42.26%
BenchmarkSwitch4        7.99       8.18       +2.38%
BenchmarkSwitchNewStr0  11.3       6.12       -45.84%
BenchmarkSwitchNewStr1  11.1       8.33       -24.95%
BenchmarkSwitchNewStr2  11.0       11.1       +0.91%
BenchmarkSwitchNewStr3  10.3       6.93       -32.72%
BenchmarkSwitchNewStr4  11.0       11.2       +1.82%

Fixes #10000

Change-Id: Ia2fffc32e9843425374c274064f709ec7ee46d80
Reviewed-on: https://go-review.googlesource.com/7698
Reviewed-by: Keith Randall <khr@golang.org>
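The two-part comparison can be sketched in user code like this (an illustrative sketch, not the compiler's generated code; `lessTwoPart` is a hypothetical name). Note the resulting order differs from plain lexicographic order, which is fine for binary-search dispatch as long as the same order is used consistently:

```go
package main

import "fmt"

// lessTwoPart compares lengths first (a cheap integer compare) and
// falls back to the expensive byte-wise comparison only when the
// lengths are equal.
func lessTwoPart(a, b string) bool {
	if len(a) != len(b) {
		return len(a) < len(b)
	}
	return a < b
}

func main() {
	fmt.Println(lessTwoPart("go", "gopher")) // true: shorter sorts first
	fmt.Println(lessTwoPart("abc", "abd"))   // true: equal length, byte-wise
}
```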
-
Josh Bleecher Snyder authored
Change-Id: I79b7ed8f7e78e9d35b5e30ef70b98db64bc68a7b
Reviewed-on: https://go-review.googlesource.com/7720
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
Comment changes only.

Change-Id: I56848814564c4aa0988b451df18bebdfc88d6d94
Reviewed-on: https://go-review.googlesource.com/7721
Reviewed-by: Rob Pike <r@golang.org>
-
Dmitry Vyukov authored
One of my earlier versions of finer-grained select locking failed on this test. If you just naively lock and check channels one-by-one, it is possible that you skip over ready channels. Consider that initially c1 is ready and c2 is not. Select checks c2. Then another goroutine makes c1 not ready and c2 ready (in that order). Then select checks c1, concludes that no channels are ready, and executes the default case. But at every point in time at least one channel was ready, so the default case must not be executed.

Change-Id: I3594bf1f36cfb120be65e2474794f0562aebcbbd
Reviewed-on: https://go-review.googlesource.com/7550
Reviewed-by: Russ Cox <rsc@golang.org>
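The guarantee the test exercises can be sketched like this (an illustrative sketch, not the runtime test itself; `tryRecv` is a hypothetical name): if some channel is ready for the entire duration of the select, the default case must never fire.

```go
package main

import "fmt"

// tryRecv performs a non-blocking receive over two channels and
// reports which case fired. If a channel is ready for the whole
// select, the language requires that default is not chosen.
func tryRecv(c1, c2 chan int) string {
	select {
	case <-c1:
		return "c1"
	case <-c2:
		return "c2"
	default:
		return "default"
	}
}

func main() {
	c1 := make(chan int, 1)
	c2 := make(chan int, 1)
	c1 <- 1 // c1 is ready throughout; default must not fire
	fmt.Println(tryRecv(c1, c2)) // c1
}
```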
-
Aaron Jacobs authored
Change-Id: I216511a4bce431de0a468f618a7a7c4da79e2979
Reviewed-on: https://go-review.googlesource.com/7710
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Adam Langley authored
RC4 is frowned upon[1] at this point and major providers are disabling it by default[2]. Those who still need RC4 support in crypto/tls can enable it by specifying the CipherSuites slice in crypto/tls.Config explicitly.

Fixes #10094.

[1] https://tools.ietf.org/html/rfc7465
[2] https://blog.cloudflare.com/killing-rc4-the-long-goodbye/

Change-Id: Ia03a456f7e7a4362b706392b0e3c4cc93ce06f9f
Reviewed-on: https://go-review.googlesource.com/7647
Reviewed-by: Andrew Gerrand <adg@golang.org>
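Opting back in looks like this: list the RC4 suites explicitly in the Config, since they are no longer in the default list. (A minimal sketch; which RC4 suites you actually need depends on your peers.)

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// rc4Config re-enables the RC4 cipher suites by naming them
// explicitly, overriding the default suite list.
func rc4Config() *tls.Config {
	return &tls.Config{
		CipherSuites: []uint16{
			tls.TLS_RSA_WITH_RC4_128_SHA,
			tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,
			tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
		},
	}
}

func main() {
	fmt.Println(len(rc4Config().CipherSuites)) // 3
}
```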
-
Adam Langley authored
Just so that we notice in the future if another hash function is added without updating this utility function, make it panic when passed an unknown handshake hash function. (Which should never happen.)

Change-Id: I60a6fc01669441523d8c44e8fbe7ed435e7f04c8
Reviewed-on: https://go-review.googlesource.com/7646
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Joël Stemmer <stemmertech@gmail.com>
-
Adam Langley authored
crypto/rand.Reader doesn't ensure that short reads don't happen. This change contains a couple of fixups where io.ReadFull wasn't being used with it.

Change-Id: I3855b81f5890f2e703112eeea804aeba07b6a6b8
Reviewed-on: https://go-review.googlesource.com/7645
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
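The fixed pattern looks like this (a minimal sketch; `randomKey` is an illustrative name): wrap the reader in io.ReadFull so a short read becomes an error instead of a partially filled buffer.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"io"
)

// randomKey fills an n-byte buffer from crypto/rand.Reader.
// io.ReadFull keeps reading until the buffer is full, guarding
// against short reads from the underlying Reader.
func randomKey(n int) ([]byte, error) {
	key := make([]byte, n)
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		return nil, err
	}
	return key, nil
}

func main() {
	key, err := randomKey(32)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(key)) // 32
}
```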
-
Ian Lance Taylor authored
For example, "GOARCH=sparc go build -compiler=gccgo" should not crash merely because the architecture character for sparc is not known.

Change-Id: I18912c7f5d90ef8f586592235ec9d6e5053e4bef
Reviewed-on: https://go-review.googlesource.com/7695
Reviewed-by: Russ Cox <rsc@golang.org>
-
- 17 Mar, 2015 22 commits
-
-
Robert Griesemer authored
Change-Id: I72e8389ec080be8a0119f98df898de6f5510fa4d
Reviewed-on: https://go-review.googlesource.com/7693
Reviewed-by: Alan Donovan <adonovan@google.com>
-
David Chase authored
Change-Id: I19e6542e7d79d60e39d62339da51a827c5aa6d3b
Reviewed-on: https://go-review.googlesource.com/7668
Reviewed-by: Russ Cox <rsc@golang.org>
-
Russ Cox authored
The value in question is really a bit pattern (a pointer with extra bits thrown in), so treat it as a uintptr instead, avoiding the generation of a write barrier when there might not be a p. Also add the obligatory //go:nowritebarrier.

Change-Id: I4ea097945dd7093a140f4740bcadca3ce7191971
Reviewed-on: https://go-review.googlesource.com/7667
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
-
Rick Hudson authored
The GC assumes that there will be no asynchronous write barriers when the world is stopped. This keeps the synchronization between write barriers and the GC simple. However, currently, there are a few places in runtime code where this assumption does not hold. The GC stops the world by collecting all Ps, which stops all user Go code, but small parts of the runtime can run without a P. For example, the code that releases a P must still deschedule its G onto a runnable queue before stopping. Similarly, when a G returns from a long-running syscall, it must run code to reacquire a P. Currently, this code can contain write barriers. This can lead to the GC collecting reachable objects if something like the following sequence of events happens:

1. GC stops the world by collecting all Ps.
2. G #1 returns from a syscall (for example), tries to install a pointer to object X, and calls greyobject on X.
3. greyobject on G #1 marks X, but does not yet add it to a work buffer. At this point, X is effectively black, not grey, even though it may point to white objects.
4. GC reaches X through some other path and calls greyobject on X, but greyobject does nothing because X is already marked.
5. GC completes.
6. greyobject on G #1 adds X to a work buffer, but it's too late.
7. Objects that were reachable only through X are incorrectly collected.

To fix this, we check the invariant that no asynchronous write barriers happen when the world is stopped by checking that write barriers always have a P, and modify all currently known sources of these writes to disable the write barrier. In all modified cases this is safe because the object in question will always be reachable via some other path.

Some of the trace code was turned off, in particular the code that traces returning from a syscall. The GC assumes that, as far as the heap is concerned, the thread is stopped when it is in a syscall. Upon returning, the trace code must not do any heap writes for the same reasons discussed above.

Fixes #10098
Fixes #9953
Fixes #9951
Fixes #9884
May relate to #9610 #9771

Change-Id: Ic2e70b7caffa053e56156838eb8d89503e3c0c8a
Reviewed-on: https://go-review.googlesource.com/7504
Reviewed-by: Austin Clements <austin@google.com>
-
David Crawshaw authored
Some versions of libc, in this case Android's bionic, point environ directly at the envp memory.
https://android.googlesource.com/platform/bionic/+/master/libc/bionic/libc_init_common.cpp#104

The Go runtime does something surprisingly similar, building the runtime's envs []string using gostringnocopy. Both libc and the Go runtime reusing this memory interacts badly: when syscall.Setenv uses cgo to call setenv(3), C modifies the underlying memory of a Go string.

This manifests on android/arm. With GOROOT=/data/local/tmp, a runtime test calls syscall.Setenv("/os"), resulting in runtime.GOROOT()=="/os\x00a/local/tmp/goroot".

Avoid this by copying environment string memory into Go. Covered by runtime.TestFixedGOROOT on android/arm.

Change-Id: Id0cf9553969f587addd462f2239dafca1cf371fa
Reviewed-on: https://go-review.googlesource.com/7663
Reviewed-by: Keith Randall <khr@golang.org>
-
Robert Griesemer authored
Fixed several corner-case bugs and added corresponding tests.

Change-Id: I23096b9caeeff0956f65ab59fa91e168d0e47bb8
Reviewed-on: https://go-review.googlesource.com/7001
Reviewed-by: Alan Donovan <adonovan@google.com>
-
Dmitry Vyukov authored
IRIW requires 4 threads: the first writes x, the second writes y, the third reads x and y, and the fourth reads y and x. This is the Peterson/Dekker mutual exclusion algorithm, based on critical store-load sequences:
http://en.wikipedia.org/wiki/Dekker's_algorithm
http://en.wikipedia.org/wiki/Peterson%27s_algorithm

Change-Id: I30a00865afbe895f7617feed4559018f81ff4528
Reviewed-on: https://go-review.googlesource.com/7561
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
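The critical store-load sequence can be sketched with Go's sequentially consistent atomics (an illustrative sketch, not the runtime test; `enter0`/`enter1` are hypothetical names). Each side publishes its intent flag and then checks the other's; under sequential consistency both sides cannot read 0 and enter together.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

var flag0, flag1 int32

// enter0 stores its flag, then loads the other side's flag.
// It may enter the critical section only if the load sees 0.
func enter0() bool {
	atomic.StoreInt32(&flag0, 1)
	return atomic.LoadInt32(&flag1) == 0
}

func enter1() bool {
	atomic.StoreInt32(&flag1, 1)
	return atomic.LoadInt32(&flag0) == 0
}

func main() {
	// Run sequentially here for determinism: the first caller wins,
	// the second sees the first's flag and backs off.
	fmt.Println(enter0(), enter1()) // true false
}
```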
-
Dmitry Vyukov authored
Channels and sync.Mutexes allow another goroutine to acquire a resource ahead of an unblocked goroutine. This is good for performance, but it leads to futile wakeups (the unblocked goroutine needs to block again). Futile wakeups caused user confusion during the very first evaluation of the tracing functionality on a real server (a goroutine appears to acquire a mutex in a loop, while there is no loop in the user code). This change detects futile wakeups on channels and emits a special event to denote the fact. Later, the parser finds entire wakeup sequences (unblock -> start -> block) and removes them. sync.Mutex will be supported in a separate change.

Change-Id: Iaaaee9d5c0921afc62b449a97447445030ac19d3
Reviewed-on: https://go-review.googlesource.com/7380
Reviewed-by: Keith Randall <khr@golang.org>
-
David Crawshaw authored
The Go builders (and standard development cycle) for programs on iOS require running the programs under lldb. Unfortunately lldb intercepts SIGSEGV and will not give it back.
https://llvm.org/bugs/show_bug.cgi?id=22868

We get around this by never letting lldb see the SIGSEGV. On darwin, Unix signals are emulated on top of mach exceptions. The debugger registers a task-level mach exception handler. We register a thread-level exception handler which acts as a faux signal handler. The thread-level handler gets precedence over the task-level handler, so we can turn the exception EXC_BAD_ACCESS into a panic before lldb can see it.

Fixes #10043

Change-Id: I64d7c310dfa7ecf60eb1e59f094966520d473335
Reviewed-on: https://go-review.googlesource.com/7072
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
-
Dave Cheney authored
Fix recover4.go to work on 64kb systems.

Change-Id: I211cb048de1268a8bbac77c6f3a1e0b8c8277594
Reviewed-on: https://go-review.googlesource.com/7673
Reviewed-by: Minux Ma <minux@golang.org>
-
Jeremy Jackins authored
Change-Id: I367b5a837844e3bee1576c59497d37f5e67c761d
Reviewed-on: https://go-review.googlesource.com/7674
Reviewed-by: Minux Ma <minux@golang.org>
-
Dave Cheney authored
This reverts commit 1313e798.

Change-Id: I96cc58baf71156fdfbf8fd61332744bcc3ea52e5
Reviewed-on: https://go-review.googlesource.com/7670
Reviewed-by: Dave Cheney <dave@cheney.net>
-
Dave Cheney authored
Updates #10180

Temporarily disable this test on ppc64 systems, as all our builders use a 64k page size. We need a portable way to get the page size of the host so we can correctly size the mmap hole.

Change-Id: Ibd36ebe2f54cf75a44667e2070c385f0daaca481
Reviewed-on: https://go-review.googlesource.com/7652
Reviewed-by: Andrew Gerrand <adg@golang.org>
-
Austin Clements authored
When checkmark fails, greyobject dumps both the object that pointed to the unmarked object and the unmarked object. This code cluttered up greyobject, was copy-pasted for the two objects, and the copy for dumping the unmarked object was not entirely correct.

Extract object dumping out to a new function. This declutters greyobject and fixes the bugs in dumping the unmarked object. The new function is slightly cleaned up from the original code to have more natural control flow and shows a marker on the field in the base object that points to the unmarked object to make it easy to find.

Change-Id: Ib51318a943f50b0b99995f0941d03ee8876b9fcf
Reviewed-on: https://go-review.googlesource.com/7506
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
scanobject no longer returns the new wbuf.

Change-Id: I0da335ae5cd7ef7ea0e0fa965cf0e9f3a650d0e6
Reviewed-on: https://go-review.googlesource.com/7505
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
This directory is processed by mkbuiltin.go and generates builtin.go. It should be named builtin too, not builtins, both for consistency and because file and directory names in general are singular unless forced otherwise.

Commented on CL 6233 too.

Change-Id: Ic5d3671443ae9292b69fda118f61a11c88d823fa
Reviewed-on: https://go-review.googlesource.com/7660
Reviewed-by: Minux Ma <minux@golang.org>
-
Russ Cox authored
Also replace proginfo call with cheaper calls where only flags are needed.

Change-Id: Ib6e5c12bd8752b87c0d8bcf22fa9e25e04a7941f
Reviewed-on: https://go-review.googlesource.com/7630
Reviewed-by: Rob Pike <r@golang.org>
-
Russ Cox authored
- avoid copy in range ytab
- add fast path to prefixof

Change-Id: I88aa9d91a0abe80d253f7c3bca950b4613297499
Reviewed-on: https://go-review.googlesource.com/7628
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
-
Russ Cox authored
Change-Id: Iaf5a7d25e6308b32c17a38afbbd46befa17aa3a4
Reviewed-on: https://go-review.googlesource.com/7629
Reviewed-by: Rob Pike <r@golang.org>
-
Russ Cox authored
These were introduced during the C -> Go translation when the loop increment contained multiple statements.

Change-Id: Ic8abd8dcb3308851a1f7024de00711f0f984e684
Reviewed-on: https://go-review.googlesource.com/7627
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Rob Pike <r@golang.org>
-
Russ Cox authored
Change-Id: I18f2e2ee141ebb65a8579ee1e440cb9c2069ef86
Reviewed-on: https://go-review.googlesource.com/7626
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Russ Cox authored
Substituting in multiple passes meant walking the type multiple times, and worse, if a complex type was substituted in an early pass, later passes would follow it, possibly recursively, until hitting the depth 10 limit.

Change-Id: Ie61d6ec08438e297baabe932afe33d08f358e55f
Reviewed-on: https://go-review.googlesource.com/7625
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Rob Pike <r@golang.org>
-