- 17 Mar, 2016 18 commits
-
-
Keith Randall authored
Make sure we don't generate write barriers in runtime code that is marked to forbid write barriers. Implement the optimization that if we're writing a sliced slice back to the location it came from, we don't need a write barrier. Fixes #14784 Change-Id: I04b6a3b2ac303c19817e932a36a3b006de103aaa Reviewed-on: https://go-review.googlesource.com/20791 Reviewed-by: Austin Clements <austin@google.com>
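The sliced-slice case reads roughly like the sketch below (an illustration of the pattern, not code from the CL; the reader type is hypothetical). Writing a reslice of a field back into that same field stores a backing-array pointer the collector has already seen, so the store needs no write barrier.

```go
package main

import "fmt"

type reader struct {
	buf []byte
}

// advance writes a sliced slice back to the location it came from. The
// pointer being stored is the one r.buf already holds, which is exactly
// the case this CL teaches the compiler to recognize.
func (r *reader) advance(n int) {
	r.buf = r.buf[n:]
}

func main() {
	r := &reader{buf: []byte("hello")}
	r.advance(2)
	fmt.Printf("%s\n", r.buf) // llo
}
```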
-
Shahar Kohanim authored
Reduces size of archives in pkg/linux_amd64 by 1.4MB (3.2%), slightly improving link time.

name       old s/op   new s/op   delta
LinkCmdGo  0.52 ± 3%  0.51 ± 2%  -0.65%  (p=0.000 n=98+99)

Change-Id: I7e265f4d4dd08967c5c5d55c1045e533466bbbec Reviewed-on: https://go-review.googlesource.com/20802 Run-TryBot: David Crawshaw <crawshaw@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Ingo Oeser authored
Change-Id: I32fd5c36f055fdb1dfe56524085676aa4111089a Reviewed-on: https://go-review.googlesource.com/20830 Reviewed-by: David Chase <drchase@google.com>
-
Matthew Dempsky authored
Boolean expressions involving t.Thistuple were converted to use t.Recv(), because it's a bit clearer and will hopefully reveal cases where we could remove redundant calls to t.Recv() (in followup CLs). The other cases were all converted to use t.Recvs().NumFields(), t.Params().NumFields(), or t.Results().NumFields(). Change-Id: I4df91762e7dc4b2ddae35995f8dd604a52c09b09 Reviewed-on: https://go-review.googlesource.com/20796 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Matthew Dempsky <mdempsky@google.com>
-
Matthew Dempsky authored
We never need a type hash for a method type, so skip trying to overwrite Thistuple. Change-Id: I8de6480ba5fd321dfa134facf7661461d298840e Reviewed-on: https://go-review.googlesource.com/20795 Reviewed-by: Russ Cox <rsc@golang.org>
-
Matthew Dempsky authored
This is an automated rewrite of all the calls of the form: for f, it := IterFields(t); f != nil; f = it.Next() { ... } Followup CLs will work on cleaning up the remaining cases. Change-Id: Ic1005ad45ae0b50c63e815e34e507e2d2644ba1a Reviewed-on: https://go-review.googlesource.com/20794 Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Matthew Dempsky authored
Analogous to the Nodes type used as a more space efficient []*Node representation. Passes toolstash -cmp. Change-Id: I8341e45304777d6e4200bd36dadc935b07ccf3ff Reviewed-on: https://go-review.googlesource.com/20793 Reviewed-by: David Crawshaw <crawshaw@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
Change-Id: I85c0bcf02842cc32dbc9bfdcea27efe871173574 Reviewed-on: https://go-review.googlesource.com/20774 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Matthew Dempsky authored
The compiler doesn't care about the runtime's sudog type. Stop constructing it. Change-Id: If1885fe30b2e215a08d17662eab5ea6d81fe58ab Reviewed-on: https://go-review.googlesource.com/20797 Run-TryBot: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Ian Lance Taylor authored
For every string constant the compiler was creating 2 Sym's and 2 Node's. It would never refer to them again, but would keep them alive in gostringpkg. This changes the code to just use obj.LSym's instead. When compiling x/tools/go/types, this yields about a 15% reduction in the number of calls to newname and a 3% reduction in the total number of Node objects. Unfortunately I couldn't see any change in compile time, but reducing memory usage is desirable anyhow. Passes toolstash -cmp. Change-Id: I24f1cb1e6cff0a3afba4ca66f7166874917a036b Reviewed-on: https://go-review.googlesource.com/20792 Reviewed-by: David Crawshaw <crawshaw@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
David Chase authored
This should probably be considered "experimental" at this stage, but what it needs is feedback from adventurous adopters. I think the data structure used for describing escape reasons might be extendable to allow a cleanup of the underlying algorithms, which suffer from insufficiently separated concerns (the graph does not deal well with escape level adjustments, so it is augmented by a second custom-walk portion of the "flood" phase). It would be better to put it all, including level adjustments, in a single graph structure, and then simply flood the graph.

Tweaked to avoid allocations in the no-logging case. Modified run.go to ignore lines with a leading "#" in the output (since such a line can never match an expected error), and in -update_errors to ignore leading tabs in output lines and to normalize embedded filenames.

Currently requires -m -m because otherwise the noise/update burden for the other escape tests is considerable. There is a partial test. Existing escape analysis tests seem to cover all except the panic case and what looks like it might be unreachable code in escape analysis.

Fixes #10526. Change-Id: I2524fdec54facae48b00b2548e25d9e46fcaf832 Reviewed-on: https://go-review.googlesource.com/18041 Run-TryBot: David Chase <drchase@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
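The new explanations are requested by doubling the existing -m flag; a hedged sketch of how one might exercise it (the exact diagnostic wording varies by compiler version, so no output is reproduced here):

```go
// escape.go -- a tiny program whose local value escapes to the heap.
// Building it with the doubled -m flag asks the compiler to print its
// escape-analysis decisions together with the new explanations:
//
//	go build -gcflags='-m -m' escape.go
package main

type point struct{ x, y int }

// newPoint returns the address of a local, so p must be heap-allocated;
// the -m -m output explains the chain of assignments forcing this.
func newPoint(x, y int) *point {
	p := point{x, y}
	return &p
}

func main() {
	_ = newPoint(1, 2)
}
```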
-
Todd Neal authored
If a phi has duplicate arguments, then the new block that is constructed to remove the critical edge can be used for all of the duplicate arguments.

read-only data = -904 bytes (-0.058308%)
global text (code) = -2240 bytes (-0.060056%)
Total difference -3144 bytes (-0.056218%)

Change-Id: Iee3762744d6a8c9d26cdfa880bb23feb62b03c9c Reviewed-on: https://go-review.googlesource.com/20746 Run-TryBot: Todd Neal <todd@tneal.org> Reviewed-by: Keith Randall <khr@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Richard Miller authored
Between the enumeration of fdsToClose in the parent and the closing of fds in the child, it's possible for a file to be closed in another thread. If that file descriptor is reused when opening the child-parent status pipe, it will be closed prematurely in the child and the forkExec gets out of sync. This has been observed to cause failures in builder tests when the link step of a build is started before the compile step has run, with "file does not exist" messages as the visible symptom. The simple workaround is to check against closing the pipe. A more comprehensive solution would be to rewrite the fd closing code to avoid races, along the lines of the long ago proposed https://golang.org/cl/57890043 - but meanwhile this correction will prevent some builder failures. Change-Id: I4ef5eaea70c21d00f4df0e0847a1c5b2966de7da Reviewed-on: https://go-review.googlesource.com/20800 Run-TryBot: David du Colombier <0intro@gmail.com> Reviewed-by: David du Colombier <0intro@gmail.com>
-
Martin Möhrmann authored
Separate unicode formatting into its own fmt_unicode function. Remove the fmtUnicode wrapper and the f.unicode and f.uniQuote flags that are not needed anymore. Remove mangling and restoring of the precision and sharp flags. Removes the buffer copy needed for %#U by moving the character encoding before the number encoding. Changes the behavior of plus and space flag to have no effect instead of printing a plus or space before "U+". Always print at least four digits after "U+" even if precision is set to less than 4. Change-Id: If9a0ee79e9eca2c76f06a4e0fdd75d98393899ac Reviewed-on: https://go-review.googlesource.com/20574 Run-TryBot: Rob Pike <r@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rob Pike <r@golang.org>
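For orientation, the %U verbs touched here behave as in this sketch (illustrative; the minimum-four-digits and flag behavior described above are what this CL establishes):

```go
package main

import "fmt"

func main() {
	// %U prints the code point as "U+" followed by at least four hex digits.
	fmt.Printf("%U\n", 'x') // U+0078
	// %#U also prints the character itself after the code point.
	fmt.Printf("%#U\n", '世') // U+4E16 '世'
	// A precision below four is still padded out to four digits.
	fmt.Printf("%.2U\n", 'x') // U+0078
}
```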
-
Keith Randall authored
Keep track of how many uses each Value has. Each appearance in Value.Args and in Block.Control counts once. The number of uses of a value is generically useful to constrain rewrite rules. For instance, we might want to prevent merging index operations into loads if the same index expression is used lots of times. But I have one use in particular for which the use count is required. We must make sure we don't combine ops with loads if the load has more than one use. Otherwise, we may split a single load into multiple loads and that breaks perceived behavior in the presence of races. In particular, the load of m.state in sync/mutex.go:Lock can't be done twice. (I have a separate CL which triggers the mutex failure. This CL has a test which demonstrates a similar failure.) Change-Id: Icaafa479239f48632a069d0c3f624e6ebc6b1f0e Reviewed-on: https://go-review.googlesource.com/20790 Run-TryBot: Keith Randall <khr@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Todd Neal <todd@tneal.org>
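The race argument is easiest to see at the Go level; a hedged sketch (mutexLike and check are illustrative names, not the CL's test case):

```go
package main

import "fmt"

type mutexLike struct{ state int32 }

// check loads m.state once and uses the value twice. If the compiler folded
// the load into both comparisons, the single load would become two loads,
// and a concurrent writer could make them observe different values -- hence
// the rule that ops are only combined with loads that have exactly one use.
func check(m *mutexLike) bool {
	s := m.state
	return s&1 == 0 && s&2 == 0
}

func main() {
	fmt.Println(check(&mutexLike{}))
}
```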
-
Dave Cheney authored
Spotted while splunking in the compiler with GOGC=off.

name      old time/op  new time/op  delta
Template  407ms ± 5%   402ms ± 6%   ~       (p=0.301 n=20+20)
GoTypes   1.33s ± 2%   1.29s ± 1%   -3.47%  (p=0.000 n=20+20)
Compiler  6.21s ± 1%   5.91s ± 2%   -4.83%  (p=0.000 n=20+20)

name      old alloc/op  new alloc/op  delta
Template  66.8MB ± 0%   63.9MB ± 0%   -4.46%  (p=0.000 n=19+20)
GoTypes   232MB ± 0%    220MB ± 0%    -5.16%  (p=0.000 n=19+17)
Compiler  1.02GB ± 0%   0.97GB ± 0%   -5.81%  (p=0.000 n=20+20)

name      old allocs/op  new allocs/op  delta
Template  789k ± 0%      708k ± 0%      -10.28%  (p=0.000 n=19+20)
GoTypes   2.49M ± 0%     2.20M ± 0%     -11.57%  (p=0.000 n=20+20)
Compiler  10.8M ± 0%     9.4M ± 0%      -12.82%  (p=0.000 n=20+20)

Change-Id: I76615cab912dde10595ca6ab9979ff6c5f1aec49 Reviewed-on: https://go-review.googlesource.com/20782 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
-
Michael Hudson-Doyle authored
When building shared libraries, all symbols on Allsym are marked reachable. What I didn't realize was that this includes the ".dup" symbols created when "dupok" symbols are read from multiple package files. This breaks now because deadcode makes some assumptions that fail for these ".dup" symbols, but it was a bad idea in any case -- I suspect this change makes libstd.so a bunch smaller, but creating it was broken before this CL so I can't be sure. This change simply stops adding these symbols to Allsym, which might make some of the many iterations over Allsym the linker does a touch quicker, although that's not the motivation here. Add a test that no symbol called ".dup" makes it into the runtime shared library. Fixes #14841 Change-Id: I65dd6e88d150a770db2d01b75cfe5db5fd4f8d25 Reviewed-on: https://go-review.googlesource.com/20780 Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com> Reviewed-by: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Matthew Dempsky authored
Receiver parameters generally aren't relevant to the function signature type. In particular:

1. When checking whether a type's method implements an interface's method, we specifically want to ignore the receiver parameters, because they'll be different.
2. When checking interface type equality, interface methods always use the same "fakethis" *struct{} type as their receiver.
3. Finally, method expressions and method values degenerate into receiver-less function types.

The only case where we care about receiver types matching is in addmethod, which is easily handled by adding an extra Eqtype check of the receiver parameters. Also, added a test for this, since (surprisingly) there weren't any. As precedent, go/types.Identical ignores receiver parameters when comparing go/types.Signature values. Notably, this allows us to slightly simplify the "implements" function, which is used for checking whether type/interface t implements interface iface. Currently, cmd/compile actually works around Eqtype's receiver parameter checking by creating new throwaway TFUNC Types without the receiver parameter. (Worse, the compiler currently only provides APIs to build TFUNC Types from Nod syntax trees, so building those throwaway types also involves first building throwaway syntax trees.) Passes toolstash -cmp. Change-Id: Ib07289c66feacee284e016bc312e8c5ff674714f Reviewed-on: https://go-review.googlesource.com/20602 Reviewed-by: Robert Griesemer <gri@golang.org>
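Points 1 and 3 above have a simple language-level reflection; a hedged illustration (the CL itself changes the compiler's internal Eqtype, not the language, and the names here are made up):

```go
package main

import "fmt"

type Stringer interface{ String() string }

type temp float64

// String has receiver type temp, while the interface method has none; the
// match is decided by the rest of the signature, which is why receiver
// parameters are ignored when comparing.
func (t temp) String() string { return fmt.Sprintf("%.1f°", float64(t)) }

func main() {
	var s Stringer = temp(36.6)
	fmt.Println(s.String())

	// A method expression degenerates into a receiver-less function type:
	// the receiver becomes an ordinary first parameter.
	var f func(temp) string = temp.String
	fmt.Println(f(temp(40)))
}
```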
-
- 16 Mar, 2016 22 commits
-
-
Josh Bleecher Snyder authored
Follow-up to CL 20674. Passes toolstash -cmp. Change-Id: I065fd4cd80d996c1e6566773189401ca4630c1ca Reviewed-on: https://go-review.googlesource.com/20692 Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Dave Cheney <dave@cheney.net>
-
Josh Bleecher Snyder authored
We can't perfectly predict how large the function will be, but we can make a safe overestimate. No significant CPU time changes.

name      old alloc/op  new alloc/op  delta
Template  67.7MB ± 0%   67.5MB ± 0%   -0.24%  (p=0.029 n=4+4)
Unicode   43.9MB ± 0%   43.8MB ± 0%   -0.13%  (p=0.029 n=4+4)
GoTypes   244MB ± 0%    244MB ± 0%    -0.28%  (p=0.029 n=4+4)
Compiler  1.05GB ± 0%   1.05GB ± 0%   -0.38%  (p=0.029 n=4+4)

name      old allocs/op  new allocs/op  delta
Template  795k ± 0%      794k ± 0%      -0.14%  (p=0.029 n=4+4)
Unicode   569k ± 0%      569k ± 0%      ~       (p=0.114 n=4+4)
GoTypes   2.59M ± 0%     2.58M ± 0%     -0.11%  (p=0.029 n=4+4)
Compiler  11.0M ± 0%     11.0M ± 0%     -0.09%  (p=0.029 n=4+4)

Passes toolstash -cmp. Change-Id: I0a92ab04cba7520540ec58fe7189666d0e771454 Reviewed-on: https://go-review.googlesource.com/20771 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
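The change boils down to ordinary capacity preallocation; a generic sketch under the assumption that a safe overestimate is cheap to compute (the names are illustrative, not the compiler's):

```go
package main

import "fmt"

// collect appends one entry per input item plus a few bookkeeping extras.
// Sizing the slice with a safe overestimate up front avoids the repeated
// growth reallocations an empty slice would incur while appending.
func collect(items []string) []string {
	const extras = 4 // safe overestimate of additional entries
	out := make([]string, 0, len(items)+extras)
	for _, it := range items {
		out = append(out, it)
	}
	return out
}

func main() {
	fmt.Println(collect([]string{"a", "b", "c"}))
}
```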
-
Josh Bleecher Snyder authored
Passes toolstash -cmp. Change-Id: I77a415a4e5d8de7eb902fb0866aaf8783259485a Reviewed-on: https://go-review.googlesource.com/20770 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
James Bardin authored
Add a C.CBytes function to copy a Go byte slice into C memory. This returns an unsafe.Pointer, since that is what needs to be passed to C.free, and the data is often opaque bytes anyway. Fixes #14838 Change-Id: Ic7bc29637eb6f1f5ee409b3898c702a59833a85a Reviewed-on: https://go-review.googlesource.com/20762 Reviewed-by: Ian Lance Taylor <iant@golang.org>
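A minimal cgo sketch of the new helper (requires a C toolchain to build; the round trip through C.GoBytes is only there to show the data really lives in C memory):

```go
package main

/*
#include <stdlib.h>
*/
import "C"

import "fmt"

func main() {
	data := []byte("hello")

	// C.CBytes copies the Go bytes into C-allocated memory and returns an
	// unsafe.Pointer -- the same type C.free expects back.
	p := C.CBytes(data)
	defer C.free(p)

	// C.GoBytes copies len(data) bytes back from C memory into a fresh
	// Go slice, completing the round trip.
	back := C.GoBytes(p, C.int(len(data)))
	fmt.Printf("%s\n", back)
}
```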
-
Austin Clements authored
Currently we generate write barriers when the right side of an assignment is a global function. This doesn't fall into the existing case of storing an address of a global because we haven't lowered the function to a pointer yet. This write barrier is unnecessary, so eliminate it. Fixes #13901. Change-Id: Ibc10e00a8803db0fd75224b66ab94c3737842a79 Reviewed-on: https://go-review.googlesource.com/20772 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Keith Randall <khr@golang.org>
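The affected shape of code looks roughly like this (handler and hello are illustrative names): the right-hand side is a statically allocated function, so the collector gains nothing from a barrier on the store.

```go
package main

import "fmt"

func hello() { fmt.Println("hello") }

// handler is a global function-valued variable. Assigning a top-level
// function to it stores a pointer to static data, which is why the write
// barrier this CL eliminates was never needed.
var handler func()

func init() {
	handler = hello // previously emitted an unnecessary write barrier
}

func main() {
	handler()
}
```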
-
Josh Bleecher Snyder authored
Passes toolstash -cmp. Change-Id: Ie11912a16d2cd54500e2f6e84316519b80e7c304 Reviewed-on: https://go-review.googlesource.com/20672 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Josh Bleecher Snyder authored
c2go translated writing and advancing a pointer using slices. Switch to something more idiomatic. It is also more efficient, but not enough to matter. Change-Id: I67709632ac53253615a35365824ae97bbe5458d5 Reviewed-on: https://go-review.googlesource.com/20767 Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Josh Bleecher Snyder authored
Passes toolstash -cmp. Change-Id: I38eb507de2e9dc2cf01822e420bf31a91fb1b720 Reviewed-on: https://go-review.googlesource.com/20766 Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> Reviewed-by: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Robert Griesemer authored
The parser code was not reachable ever since some of the lexer cleanups. We could recognize '~' in the lexer, complain, and return a '^' instead, but it's been a few years since Go was new and this may have been a useful error. The lexer complains with "illegal character U+007E '~'" which is good enough. For #13244. Change-Id: Ie3283738486eb6f8462d594f2728ac98333c0520 Reviewed-on: https://go-review.googlesource.com/20768 Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Brad Fitzpatrick authored
Shrinks cmd/go by 30KB. Change-Id: Ied31192e85af76ebac743f8cc12bd9ef6ec5048f Reviewed-on: https://go-review.googlesource.com/20765 Reviewed-by: Matthew Dempsky <mdempsky@google.com> Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
-
Josh Bleecher Snyder authored
Change-Id: I0516d49ee8381c5e022d77c2fb41515c01c8a631 Reviewed-on: https://go-review.googlesource.com/20764 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Josh Bleecher Snyder authored
Use a local variable instead. Passes toolstash -cmp. Change-Id: I9623a40ff0d568f11afd1279b6aaa1c33eda644c Reviewed-on: https://go-review.googlesource.com/20730 Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
Instead, use a slice. Passes toolstash -cmp. Change-Id: I889fdb4ae997416f907522f549b96506be13bec7 Reviewed-on: https://go-review.googlesource.com/20699 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Josh Bleecher Snyder authored
It is unused. Passes toolstash -cmp. Change-Id: I22ae2bb432ce6be377dea43cf018ffccb6e95f37 Reviewed-on: https://go-review.googlesource.com/20698 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
Currently we shrink stacks during STW mark termination because it used to be unsafe to shrink them concurrently. For some programs, this significantly increases pause time: stack shrinking costs ~5ms/MB copied plus 2µs/shrink. Now that we've made it safe to shrink a stack without the world being stopped, shrink them during the concurrent mark phase. This reduces the STW time in the program from issue #12967 by an order of magnitude and brings it from over the 10ms goal to well under:

name           old 95%ile-markTerm-time  new 95%ile-markTerm-time  delta
Stackshrink-4  23.8ms ±60%               1.80ms ±39%               -92.44%  (p=0.008 n=5+5)

Fixes #12967.

This slows down the go1 and garbage benchmarks overall by < 0.5%.

name              old time/op  new time/op  delta
XBenchGarbage-12  2.48ms ± 1%  2.49ms ± 1%  +0.45%  (p=0.005 n=25+21)

name                      old time/op  new time/op  delta
BinaryTree17-12           2.93s ± 2%   2.97s ± 2%   +1.34%  (p=0.002 n=19+20)
Fannkuch11-12             2.51s ± 1%   2.59s ± 0%   +3.09%  (p=0.000 n=18+18)
FmtFprintfEmpty-12        51.1ns ± 2%  51.5ns ± 1%  ~       (p=0.280 n=20+17)
FmtFprintfString-12       175ns ± 1%   169ns ± 1%   -3.01%  (p=0.000 n=20+20)
FmtFprintfInt-12          160ns ± 1%   160ns ± 0%   +0.53%  (p=0.000 n=20+20)
FmtFprintfIntInt-12       265ns ± 0%   266ns ± 1%   +0.59%  (p=0.000 n=20+20)
FmtFprintfPrefixedInt-12  237ns ± 1%   238ns ± 1%   +0.44%  (p=0.000 n=20+20)
FmtFprintfFloat-12        326ns ± 1%   341ns ± 1%   +4.55%  (p=0.000 n=20+19)
FmtManyArgs-12            1.01µs ± 0%  1.02µs ± 0%  +0.43%  (p=0.000 n=20+19)
GobDecode-12              8.41ms ± 1%  8.30ms ± 2%  -1.22%  (p=0.000 n=20+19)
GobEncode-12              6.66ms ± 1%  6.68ms ± 0%  +0.30%  (p=0.000 n=18+19)
Gzip-12                   322ms ± 1%   322ms ± 1%   ~       (p=1.000 n=20+20)
Gunzip-12                 42.8ms ± 0%  42.9ms ± 0%  ~       (p=0.174 n=20+20)
HTTPClientServer-12       69.7µs ± 1%  70.6µs ± 1%  +1.20%  (p=0.000 n=20+20)
JSONEncode-12             16.8ms ± 0%  16.8ms ± 1%  ~       (p=0.154 n=19+19)
JSONDecode-12             65.1ms ± 0%  65.3ms ± 1%  +0.34%  (p=0.003 n=20+20)
Mandelbrot200-12          3.93ms ± 0%  3.92ms ± 0%  ~       (p=0.396 n=19+20)
GoParse-12                3.66ms ± 1%  3.65ms ± 1%  ~       (p=0.117 n=16+18)
RegexpMatchEasy0_32-12    85.0ns ± 2%  85.5ns ± 2%  ~       (p=0.143 n=20+20)
RegexpMatchEasy0_1K-12    267ns ± 1%   267ns ± 1%   ~       (p=0.867 n=20+17)
RegexpMatchEasy1_32-12    83.3ns ± 2%  83.8ns ± 1%  ~       (p=0.068 n=20+20)
RegexpMatchEasy1_1K-12    432ns ± 1%   432ns ± 1%   ~       (p=0.804 n=20+19)
RegexpMatchMedium_32-12   133ns ± 0%   133ns ± 0%   ~       (p=1.000 n=20+20)
RegexpMatchMedium_1K-12   40.3µs ± 1%  40.4µs ± 1%  ~       (p=0.319 n=20+19)
RegexpMatchHard_32-12     2.10µs ± 1%  2.10µs ± 1%  ~       (p=0.723 n=20+18)
RegexpMatchHard_1K-12     63.0µs ± 0%  63.0µs ± 0%  ~       (p=0.158 n=19+17)
Revcomp-12                461ms ± 1%   476ms ± 8%   +3.29%  (p=0.002 n=20+20)
Template-12               80.1ms ± 1%  79.3ms ± 1%  -1.00%  (p=0.000 n=20+20)
TimeParse-12              360ns ± 0%   360ns ± 0%   ~       (p=0.802 n=18+19)
TimeFormat-12             374ns ± 1%   372ns ± 0%   -0.77%  (p=0.000 n=20+19)
[Geo mean]                61.8µs       62.0µs       +0.40%

Change-Id: Ib60cd46b7a4987e07670eb271d22f6cee5802842 Reviewed-on: https://go-review.googlesource.com/20044 Reviewed-by: Keith Randall <khr@golang.org>
-
Austin Clements authored
We're about to add another root marking job that needs to happen only during the first markroot pass (whether that's concurrent or STW), just like finalizer scanning. Rather than introducing another flag that has the same value as finalizersDone, just rename finalizersDone to markrootDone. Change-Id: I535356c6ea1f3734cb5b6add264cb7bf48de95e8 Reviewed-on: https://go-review.googlesource.com/20043 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
Currently shrinkstack is only safe during STW because it adjusts channel-related stack pointers and moves send/receive stack slots without synchronizing with the channel code. Make it safe to use when the world isn't stopped by:

1) Locking all channels the G is blocked on while adjusting the sudogs and copying the area of the stack that may contain send/receive slots.

2) For any stack frames that may contain send/receive slots, using an atomic CAS to adjust pointers to prevent races between adjusting a pointer in a receive slot and a concurrent send writing to that receive slot.

In principle, the synchronization could be finer-grained. For example, we considered synchronizing around the sudogs, which would allow channel operations involving other Gs to continue if the G being shrunk was far enough down the send/receive queue. However, using the channel lock means no additional locks are necessary in the channel code. Furthermore, the stack shrinking code holds the channel lock for a very short time (much less than the time required to shrink the stack).

This does not yet make stack shrinking concurrent; it merely makes doing so safe. This has negligible effect on the go1 and garbage benchmarks. For #12967. Change-Id: Ia49df3a8a7be4b36e365aac4155a2416b94b988c Reviewed-on: https://go-review.googlesource.com/20042 Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Austin Clements <austin@google.com>
-
Austin Clements authored
Currently, locking a G's stack by setting its status to _Gcopystack or _Gscan is unordered with respect to channel locks. However, when we make stack shrinking concurrent, stack shrinking will need to lock the G and then acquire channel locks, which imposes an order on these. Document this lock ordering and fix closechan to respect it. Everything else already happens to respect it. For #12967. Change-Id: I4dd02675efffb3e7daa5285cf75bf24f987d90d4 Reviewed-on: https://go-review.googlesource.com/20041 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
Currently sudog.elem is never accessed concurrently, so in several cases we drop the channel lock just before reading/writing the sent/received value from/to sudog.elem. However, concurrent stack shrinking is going to have to adjust sudog.elem to point to the new stack, which means it needs a way to synchronize with accesses to sudog.elem. Hence, add sudog.elem to the fields protected by hchan.lock and scoot the unlocks down past the uses of sudog.elem. While we're here, better document the channel synchronization rules. For #12967. Change-Id: I3ad0ca71f0a74b0716c261aef21b2f7f13f74917 Reviewed-on: https://go-review.googlesource.com/20040 Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
With concurrent stack shrinking, the stack can move the instant after a G enters _Gwaiting. There are only two places that put a G into _Gwaiting: gopark and newstack. We fixed uses of gopark. This commit fixes newstack by simplifying its G transitions and, in particular, eliminating or narrowing the transient _Gwaiting states it passes through so it's clear nothing in the G is accessed while in _Gwaiting. For #12967. Change-Id: I2440ead411d2bc61beb1e2ab020ebe3cb3481af9 Reviewed-on: https://go-review.googlesource.com/20039 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
gopark calls the unlock function after setting the G to _Gwaiting. This means it's generally unsafe to access the G's stack from the unlock function because the G may start running on another P. Once we start shrinking stacks concurrently, a stack shrink could also move the stack the moment after it enters _Gwaiting and before the unlock function is called. Document this restriction and fix the two places where we currently violate it.

This is unlikely to be a problem in practice for these two places right now, but they're already skating on thin ice. For example, the following sequence could in principle cause corruption, deadlock, or a panic in the select code:

On M1/P1:
1. G1 selects on channels A and B.
2. selectgoImpl calls gopark.
3. gopark puts G1 in _Gwaiting.
4. gopark calls selparkcommit.
5. selparkcommit releases the lock on channel A.

On M2/P2:
6. G2 sends to channel A.
7. The send puts G1 in _Grunnable and puts it on P2's run queue.
8. The scheduler runs, selects G1, puts it in _Grunning, and resumes G1.
9. On G1, the sellock immediately following the gopark gets called.
10. sellock grows and moves the stack.

On M1/P1:
11. selparkcommit continues to scan the lock order for the next channel to unlock, but it's now reading from a freed (and possibly reused) stack.

This shouldn't happen in practice because step 10 isn't the first call to sellock, so the stack should already be big enough. However, once we start shrinking stacks concurrently, this reasoning won't work any more. For #12967. Change-Id: I3660c5be37e5be9f87433cb8141bdfdf37fadc4c Reviewed-on: https://go-review.googlesource.com/20038 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
Currently the g.waiting list created by a select is in poll order. However, nothing depends on this, and we're going to need access to the channel lock order in other places shortly, so modify select to put the waiting list in channel lock order. For #12967. Change-Id: If0d38816216ecbb37a36624d9b25dd96e0a775ec Reviewed-on: https://go-review.googlesource.com/20037 Reviewed-by: Rick Hudson <rlh@golang.org> Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Austin Clements <austin@google.com>
-