- 17 Feb, 2017 8 commits
-
-
Cherry Zhang authored
These seem not to really matter, but good to be correct. Change-Id: I02edb9797c3d6739725cfbe4723c75f151acd05e Reviewed-on: https://go-review.googlesource.com/36837 Run-TryBot: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
-
Cherry Zhang authored
SSA's writebarrier pass requires that WB store ops always be at the end of a block. If we move write barrier insertion into SSA and emit normal Store ops when building SSA, this requirement becomes impractical -- it would create too many blocks for all the Store ops. Redo SSA's writebarrier pass to explicitly order values in store order, so it no longer needs this requirement. Updates #17583. Fixes #19067. Change-Id: I66e817e526affb7e13517d4245905300a90b7170 Reviewed-on: https://go-review.googlesource.com/36834 Run-TryBot: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: David Chase <drchase@google.com>
-
Cherry Zhang authored
Nil check removal in the same block is disabled due to issue 18725: because the values are not ordered, a nilcheck may influence a value that is logically before it. This CL re-enables same-block nilcheck removal by ordering values in store order first. Updates #18725. Change-Id: I287a38525230c14c5412cbcdbc422547dabd54f6 Reviewed-on: https://go-review.googlesource.com/35496 Run-TryBot: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: David Chase <drchase@google.com>
-
Robert Griesemer authored
Follow-up on https://go-review.googlesource.com/36315. No functionality change. For #18616. Change-Id: Id4df34dd7d0381be06eea483a11bf92f4a01f604 Reviewed-on: https://go-review.googlesource.com/37140 Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Koki Ide authored
Change-Id: I0455ffaa51c661803d8013c7961910f920d3c3cc Reviewed-on: https://go-review.googlesource.com/37043 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Dmitry Vyukov authored
Add new starvation mode for Mutex. In starvation mode ownership is directly handed off from the unlocking goroutine to the next waiter. Newly arriving goroutines don't compete for ownership. Unfair wait time is now limited to 1ms. Also fix a long-standing bug that goroutines were requeued at the tail of the wait queue, which led to even more unfair acquisition times with multiple waiters. Performance of normal mode is not considerably affected. Fixes #13086.

On the lockskew program provided in the issue:

done in 1.207853ms
done in 1.177451ms
done in 1.184168ms
done in 1.198633ms
done in 1.185797ms
done in 1.182502ms
done in 1.316485ms
done in 1.211611ms
done in 1.182418ms

name old time/op new time/op delta
MutexUncontended-48 0.65ns ± 0% 0.65ns ± 1% ~ (p=0.087 n=10+10)
Mutex-48 112ns ± 1% 114ns ± 1% +1.69% (p=0.000 n=10+10)
MutexSlack-48 113ns ± 0% 87ns ± 1% -22.65% (p=0.000 n=8+10)
MutexWork-48 149ns ± 0% 145ns ± 0% -2.48% (p=0.000 n=9+10)
MutexWorkSlack-48 149ns ± 0% 122ns ± 3% -18.26% (p=0.000 n=6+10)
MutexNoSpin-48 103ns ± 4% 105ns ± 3% ~ (p=0.089 n=10+10)
MutexSpin-48 490ns ± 4% 515ns ± 6% +5.08% (p=0.006 n=10+10)
Cond32-48 13.4µs ± 6% 13.1µs ± 5% -2.75% (p=0.023 n=10+10)
RWMutexWrite100-48 53.2ns ± 3% 41.2ns ± 3% -22.57% (p=0.000 n=10+10)
RWMutexWrite10-48 45.9ns ± 2% 43.9ns ± 2% -4.38% (p=0.000 n=10+10)
RWMutexWorkWrite100-48 122ns ± 2% 134ns ± 1% +9.92% (p=0.000 n=10+10)
RWMutexWorkWrite10-48 206ns ± 1% 188ns ± 1% -8.52% (p=0.000 n=8+10)
Cond32-24 12.1µs ± 3% 12.4µs ± 3% +1.98% (p=0.043 n=10+9)
MutexUncontended-24 0.74ns ± 1% 0.75ns ± 1% ~ (p=0.650 n=10+10)
Mutex-24 122ns ± 2% 124ns ± 1% +1.31% (p=0.007 n=10+10)
MutexSlack-24 96.9ns ± 2% 102.8ns ± 2% +6.11% (p=0.000 n=10+10)
MutexWork-24 146ns ± 1% 135ns ± 2% -7.70% (p=0.000 n=10+9)
MutexWorkSlack-24 135ns ± 1% 128ns ± 2% -5.01% (p=0.000 n=10+9)
MutexNoSpin-24 114ns ± 3% 110ns ± 4% -3.84% (p=0.000 n=10+10)
MutexSpin-24 482ns ± 4% 475ns ± 8% ~ (p=0.286 n=10+10)
RWMutexWrite100-24 43.0ns ± 3% 43.1ns ± 2% ~ (p=0.956 n=10+10)
RWMutexWrite10-24 43.4ns ± 1% 43.2ns ± 1% ~ (p=0.085 n=10+9)
RWMutexWorkWrite100-24 130ns ± 3% 131ns ± 3% ~ (p=0.747 n=10+10)
RWMutexWorkWrite10-24 191ns ± 1% 192ns ± 1% ~ (p=0.210 n=10+10)
Cond32-12 11.5µs ± 2% 11.7µs ± 2% +1.98% (p=0.002 n=10+10)
MutexUncontended-12 1.48ns ± 0% 1.50ns ± 1% +1.08% (p=0.004 n=10+10)
Mutex-12 141ns ± 1% 143ns ± 1% +1.63% (p=0.000 n=10+10)
MutexSlack-12 121ns ± 0% 119ns ± 0% -1.65% (p=0.001 n=8+9)
MutexWork-12 141ns ± 2% 150ns ± 3% +6.36% (p=0.000 n=9+10)
MutexWorkSlack-12 131ns ± 0% 138ns ± 0% +5.73% (p=0.000 n=9+10)
MutexNoSpin-12 87.0ns ± 1% 83.7ns ± 1% -3.80% (p=0.000 n=10+10)
MutexSpin-12 364ns ± 1% 377ns ± 1% +3.77% (p=0.000 n=10+10)
RWMutexWrite100-12 42.8ns ± 1% 43.9ns ± 1% +2.41% (p=0.000 n=8+10)
RWMutexWrite10-12 39.8ns ± 4% 39.3ns ± 1% ~ (p=0.433 n=10+9)
RWMutexWorkWrite100-12 131ns ± 1% 131ns ± 0% ~ (p=0.591 n=10+9)
RWMutexWorkWrite10-12 173ns ± 1% 174ns ± 0% ~ (p=0.059 n=10+8)
Cond32-6 10.9µs ± 2% 10.9µs ± 2% ~ (p=0.739 n=10+10)
MutexUncontended-6 2.97ns ± 0% 2.97ns ± 0% ~ (all samples are equal)
Mutex-6 122ns ± 6% 122ns ± 2% ~ (p=0.668 n=10+10)
MutexSlack-6 149ns ± 3% 142ns ± 3% -4.63% (p=0.000 n=10+10)
MutexWork-6 136ns ± 3% 140ns ± 5% ~ (p=0.077 n=10+10)
MutexWorkSlack-6 152ns ± 0% 138ns ± 2% -9.21% (p=0.000 n=6+10)
MutexNoSpin-6 150ns ± 1% 152ns ± 0% +1.50% (p=0.000 n=8+10)
MutexSpin-6 726ns ± 0% 730ns ± 1% ~ (p=0.069 n=10+10)
RWMutexWrite100-6 40.6ns ± 1% 40.9ns ± 1% +0.91% (p=0.001 n=8+10)
RWMutexWrite10-6 37.1ns ± 0% 37.0ns ± 1% ~ (p=0.386 n=9+10)
RWMutexWorkWrite100-6 133ns ± 1% 134ns ± 1% +1.01% (p=0.005 n=9+10)
RWMutexWorkWrite10-6 152ns ± 0% 152ns ± 0% ~ (all samples are equal)
Cond32-2 7.86µs ± 2% 7.95µs ± 2% +1.10% (p=0.023 n=10+10)
MutexUncontended-2 8.10ns ± 0% 9.11ns ± 4% +12.44% (p=0.000 n=9+10)
Mutex-2 32.9ns ± 9% 38.4ns ± 6% +16.58% (p=0.000 n=10+10)
MutexSlack-2 93.4ns ± 1% 98.5ns ± 2% +5.39% (p=0.000 n=10+9)
MutexWork-2 40.8ns ± 3% 43.8ns ± 7% +7.38% (p=0.000 n=10+9)
MutexWorkSlack-2 98.6ns ± 5% 108.2ns ± 2% +9.80% (p=0.000 n=10+8)
MutexNoSpin-2 399ns ± 1% 398ns ± 2% ~ (p=0.463 n=8+9)
MutexSpin-2 1.99µs ± 3% 1.97µs ± 1% -0.81% (p=0.003 n=9+8)
RWMutexWrite100-2 37.6ns ± 5% 46.0ns ± 4% +22.17% (p=0.000 n=10+8)
RWMutexWrite10-2 50.1ns ± 6% 36.8ns ±12% -26.46% (p=0.000 n=9+10)
RWMutexWorkWrite100-2 136ns ± 0% 134ns ± 2% -1.80% (p=0.001 n=7+9)
RWMutexWorkWrite10-2 140ns ± 1% 138ns ± 1% -1.50% (p=0.000 n=10+10)
Cond32 5.93µs ± 1% 5.91µs ± 0% ~ (p=0.411 n=9+10)
MutexUncontended 15.9ns ± 0% 15.8ns ± 0% -0.63% (p=0.000 n=8+8)
Mutex 15.9ns ± 0% 15.8ns ± 0% -0.44% (p=0.003 n=10+10)
MutexSlack 26.9ns ± 3% 26.7ns ± 2% ~ (p=0.084 n=10+10)
MutexWork 47.8ns ± 0% 47.9ns ± 0% +0.21% (p=0.014 n=9+8)
MutexWorkSlack 54.9ns ± 3% 54.5ns ± 3% ~ (p=0.254 n=10+10)
MutexNoSpin 786ns ± 2% 765ns ± 1% -2.66% (p=0.000 n=10+10)
MutexSpin 3.87µs ± 1% 3.83µs ± 0% -0.85% (p=0.005 n=9+8)
RWMutexWrite100 21.2ns ± 2% 21.0ns ± 1% -0.88% (p=0.018 n=10+9)
RWMutexWrite10 22.6ns ± 1% 22.6ns ± 0% ~ (p=0.471 n=9+9)
RWMutexWorkWrite100 132ns ± 0% 132ns ± 0% ~ (all samples are equal)
RWMutexWorkWrite10 124ns ± 0% 123ns ± 0% ~ (p=0.656 n=10+10)

Change-Id: I66412a3a0980df1233ad7a5a0cd9723b4274528b Reviewed-on: https://go-review.googlesource.com/34310 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Russ Cox <rsc@golang.org>
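The effect of the 1ms fairness bound is easiest to see with a small contention probe. The sketch below is illustrative only and is not the lockskew program from the issue; it simply reports the worst single Lock wait observed while several goroutines hammer one Mutex:

    // Illustrative contention probe: with starvation mode, the unfair
    // wait time for any single Lock is bounded (roughly 1ms handoff).
    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    func main() {
    	var (
    		mu      sync.Mutex
    		statsMu sync.Mutex
    		worst   time.Duration
    		wg      sync.WaitGroup
    	)
    	for i := 0; i < 8; i++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			for j := 0; j < 10000; j++ {
    				start := time.Now()
    				mu.Lock()
    				wait := time.Since(start)
    				time.Sleep(10 * time.Microsecond) // hold the lock briefly
    				mu.Unlock()

    				statsMu.Lock()
    				if wait > worst {
    					worst = wait
    				}
    				statsMu.Unlock()
    			}
    		}()
    	}
    	wg.Wait()
    	fmt.Println("worst observed Lock wait:", worst)
    }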
-
Wander Lairson Costa authored
If the caller sets up a Credential in os/exec.Command, os/exec.Command.Start will end up calling setgroups(2), even if no supplementary groups were given. Only root can call setgroups(2) on BSD kernels, which causes Start to fail for non-root users when they try to set uid and gid for the new process. We fix this by introducing a new field to syscall.Credential named NoSetGroups; setgroups(2) is only called if it is false. The field uses inverted logic to preserve backward compatibility. RELNOTES=yes Change-Id: I3cff1f21c117a1430834f640ef21fd4e87e06804 Reviewed-on: https://go-review.googlesource.com/36697 Reviewed-by: Ian Lance Taylor <iant@golang.org>
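A minimal sketch of the API this introduces (Unix-only; the "id" command and the uid/gid values are placeholders, not from the CL). With NoSetGroups set, Start skips the setgroups(2) call, so a non-root caller can still set the child's uid and gid:

    // Illustrative only: run a child process with a specific uid/gid
    // without calling setgroups(2) for supplementary groups.
    package main

    import (
    	"log"
    	"os/exec"
    	"syscall"
    )

    func main() {
    	cmd := exec.Command("id")
    	cmd.SysProcAttr = &syscall.SysProcAttr{
    		Credential: &syscall.Credential{
    			Uid:         1000, // placeholder uid
    			Gid:         1000, // placeholder gid
    			NoSetGroups: true, // skip setgroups(2); field added by this CL
    		},
    	}
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("%s", out)
    }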
-
Keith Randall authored
Currently the conversion from constant divides to multiplies is mostly done during the walk pass. This is suboptimal because SSA can determine that the value being divided by is constant more often (e.g. after inlining). Change-Id: If1a9b993edd71be37396b9167f77da271966f85f Reviewed-on: https://go-review.googlesource.com/37015 Run-TryBot: Keith Randall <khr@golang.org> Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
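The transformation itself is the classic strength reduction of an unsigned division by a constant into a multiply by a precomputed reciprocal plus a shift. The sketch below is not the compiler's code, only a demonstration of the arithmetic for division by 3:

    // Demonstration of divide-by-constant strength reduction:
    // for all uint32 x, x/3 == (x * 0xAAAAAAAB) >> 33,
    // where 0xAAAAAAAB = ceil(2^33 / 3).
    package main

    import "fmt"

    func div3(x uint32) uint32 {
    	const magic = 0xAAAAAAAB
    	return uint32((uint64(x) * magic) >> 33)
    }

    func main() {
    	for _, x := range []uint32{0, 1, 2, 3, 100, 4294967295} {
    		if div3(x) != x/3 {
    			fmt.Println("mismatch at", x)
    		}
    	}
    	fmt.Println("div3(100) =", div3(100)) // 33
    }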
-
- 16 Feb, 2017 12 commits
-
-
Matthew Dempsky authored
Currently, whether we need a write barrier is simply a property of the pointer slot being written to. The only optimization we currently apply using the value being written is that pointers to stack variables can omit write barriers because they're only written to stack slots... but we already omit write barriers for all writes to the stack anyway. Passes toolstash -cmp. Change-Id: I7f16b71ff473899ed96706232d371d5b2b7ae789 Reviewed-on: https://go-review.googlesource.com/37109 Reviewed-by: Cherry Zhang <cherryyz@google.com> Run-TryBot: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Shenghou Ma authored
While we're at it, also document Yn(0, 0) = -Inf for completeness. Fixes #18823. Change-Id: Ib6db68f76d29cc2373c12ebdf3fab129cac8c167 Reviewed-on: https://go-review.googlesource.com/35970 Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com> Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
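A quick check of the documented special cases for the Bessel functions of the second kind (assuming the math package docs as updated here):

    package main

    import (
    	"fmt"
    	"math"
    )

    func main() {
    	fmt.Println(math.Y0(0))    // -Inf
    	fmt.Println(math.Y1(0))    // -Inf
    	fmt.Println(math.Yn(0, 0)) // -Inf, the case documented by this CL
    }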
-
Robert Griesemer authored
Initial platform-independent implementation. For #18616. Change-Id: I4585c55b963101af9059c06c1b8a866cb384754c Reviewed-on: https://go-review.googlesource.com/36315 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Russ Cox <rsc@golang.org>
-
Robert Griesemer authored
Fixes #15611. Change-Id: I352b145026466cafef8cf87addafbd30716bda24 Reviewed-on: https://go-review.googlesource.com/37138 Run-TryBot: Robert Griesemer <gri@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Russ Cox authored
CL 36792 fixed #17953, a linear scan caused by n goroutines piling into two different locks that hashed to the same bucket in the semaphore table. In that CL, n goroutines contending for 2 unfortunately chosen locks went from O(n²) to O(n).

This CL fixes a different linear scan, when n goroutines are contending for n/2 different locks that all hash to the same bucket in the semaphore table. In this CL, n goroutines contending for n/2 unfortunately chosen locks goes from O(n²) to O(n log n). This case is much less likely, but any linear scan eventually hurts, so we might as well fix it while the problem is fresh in our minds.

The new test in this CL checks for both linear scans. The effect of this CL on the sync benchmarks is negligible (but it fixes the new test).

name old time/op new time/op delta
Cond1-48 576ns ±10% 575ns ±13% ~ (p=0.679 n=71+71)
Cond2-48 1.59µs ± 8% 1.61µs ± 9% ~ (p=0.107 n=73+69)
Cond4-48 4.56µs ± 7% 4.55µs ± 7% ~ (p=0.670 n=74+72)
Cond8-48 9.87µs ± 9% 9.90µs ± 7% ~ (p=0.507 n=69+73)
Cond16-48 20.4µs ± 7% 20.4µs ±10% ~ (p=0.588 n=69+71)
Cond32-48 45.4µs ±10% 45.4µs ±14% ~ (p=0.944 n=73+73)
UncontendedSemaphore-48 19.7ns ±12% 19.7ns ± 8% ~ (p=0.589 n=65+63)
ContendedSemaphore-48 55.4ns ±26% 54.9ns ±32% ~ (p=0.441 n=75+75)
MutexUncontended-48 0.63ns ± 0% 0.63ns ± 0% ~ (all equal)
Mutex-48 210ns ± 6% 213ns ±10% +1.30% (p=0.035 n=70+74)
MutexSlack-48 210ns ± 7% 211ns ± 9% ~ (p=0.184 n=71+72)
MutexWork-48 299ns ± 5% 300ns ± 5% ~ (p=0.678 n=73+75)
MutexWorkSlack-48 302ns ± 6% 300ns ± 5% ~ (p=0.149 n=74+72)
MutexNoSpin-48 135ns ± 6% 135ns ±10% ~ (p=0.788 n=67+75)
MutexSpin-48 693ns ± 5% 689ns ± 6% ~ (p=0.092 n=65+74)
Once-48 0.22ns ±25% 0.22ns ±24% ~ (p=0.882 n=74+73)
Pool-48 5.88ns ±36% 5.79ns ±24% ~ (p=0.655 n=69+69)
PoolOverflow-48 4.79µs ±18% 4.87µs ±20% ~ (p=0.233 n=75+75)
SemaUncontended-48 0.80ns ± 1% 0.82ns ± 8% +2.46% (p=0.000 n=60+74)
SemaSyntNonblock-48 103ns ± 4% 102ns ± 5% -1.11% (p=0.003 n=75+75)
SemaSyntBlock-48 104ns ± 4% 104ns ± 5% ~ (p=0.231 n=71+75)
SemaWorkNonblock-48 128ns ± 4% 129ns ± 6% +1.51% (p=0.000 n=63+75)
SemaWorkBlock-48 129ns ± 8% 130ns ± 7% ~ (p=0.072 n=75+74)
RWMutexUncontended-48 2.35ns ± 1% 2.35ns ± 0% ~ (p=0.144 n=70+55)
RWMutexWrite100-48 139ns ±18% 141ns ±21% ~ (p=0.071 n=75+73)
RWMutexWrite10-48 145ns ± 9% 145ns ± 8% ~ (p=0.553 n=75+75)
RWMutexWorkWrite100-48 297ns ±13% 297ns ±15% ~ (p=0.519 n=75+74)
RWMutexWorkWrite10-48 588ns ± 7% 585ns ± 5% ~ (p=0.173 n=73+70)
WaitGroupUncontended-48 0.87ns ± 0% 0.87ns ± 0% ~ (all equal)
WaitGroupAddDone-48 63.2ns ± 4% 62.7ns ± 4% -0.82% (p=0.027 n=72+75)
WaitGroupAddDoneWork-48 109ns ± 5% 109ns ± 4% ~ (p=0.233 n=75+75)
WaitGroupWait-48 0.17ns ± 0% 0.16ns ±16% -8.55% (p=0.000 n=56+75)
WaitGroupWaitWork-48 1.78ns ± 1% 2.08ns ± 5% +16.92% (p=0.000 n=74+70)
WaitGroupActuallyWait-48 52.0ns ± 3% 50.6ns ± 5% -2.70% (p=0.000 n=71+69)

https://perf.golang.org/search?q=upload:20170215.1

Change-Id: Ia29a8bd006c089e401ec4297c3038cca656bcd0a Reviewed-on: https://go-review.googlesource.com/37103 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Matthew Dempsky authored
Passes toolstash -cmp. Change-Id: I037278404ebf762482557e2b6867cbc595074a83 Reviewed-on: https://go-review.googlesource.com/37023 Run-TryBot: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Russ Cox authored
Suggested by Dmitry in CL 36792 review. Clearly safe since there are many different semaRoots that could all have profiled sudogs calling mutexevent. Change-Id: I45eed47a5be3e513b2dad63b60afcd94800e16d1 Reviewed-on: https://go-review.googlesource.com/37104 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Russ Cox authored
Also runs 100X faster on average, because it takes so many fewer attempts to trigger the failure. Fixes #11443. Change-Id: I8c39ee48bb3ff6c36fa63083e04076771b65a80d Reviewed-on: https://go-review.googlesource.com/36841 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Chris Broadfoot authored
Change-Id: Ie2144d001c6b4b2293d07b2acf62d7e3cd0b46a7 Reviewed-on: https://go-review.googlesource.com/37130 Reviewed-by: Russ Cox <rsc@golang.org>
-
Alex Brainman authored
For #10776. Change-Id: Id64a7e35c7cdcd9be16cbe3358402fa379090e36 Reviewed-on: https://go-review.googlesource.com/36975 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Alex Brainman authored
This is what gcc does when it generates object files, and it is easier to count everything when it starts from 0. Make the Go linker do the same. gcc also does not output IMAGE_OPTIONAL_HEADER or PE64_IMAGE_OPTIONAL_HEADER for object files. Perhaps we should do the same, but not in this CL. For #10776. Change-Id: I9789c337648623b6cfaa7d18d1ac9cef32e180dc Reviewed-on: https://go-review.googlesource.com/36974 Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Alex Brainman authored
For #10776. Change-Id: I7931558257c1f6b895e4d44b46d320a54de0d677 Reviewed-on: https://go-review.googlesource.com/36973 Run-TryBot: Alex Brainman <alex.brainman@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
- 15 Feb, 2017 17 commits
-
-
Matthew Dempsky authored
Change-Id: I78ca43a0f0a6a162a2ade1352e2facb29432d4ac Reviewed-on: https://go-review.googlesource.com/37102 Run-TryBot: Matthew Dempsky <mdempsky@google.com> Reviewed-by: Keith Randall <khr@golang.org>
-
Matthew Dempsky authored
No behavior change. Change-Id: I595c15ee976adf21bdbabdf24edf203c9e446185 Reviewed-on: https://go-review.googlesource.com/36958 Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com> Run-TryBot: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Ian Lance Taylor authored
Fixes #19114. Change-Id: I352add53d6ee8bf78792564225099f8537ac6b46 Reviewed-on: https://go-review.googlesource.com/37106 Run-TryBot: Ian Lance Taylor <iant@golang.org> Reviewed-by: David du Colombier <0intro@gmail.com>
-
Sarah Adams authored
This change removes the punitive language and anonymous reporting mechanism from the Code of Conduct document. Read on for the rationale.

More than a year has passed since the Go Code of Conduct was introduced. In that time, there have been a small number (<30) of reports to the Working Group. Some reports we handled well, with positive outcomes for all involved. A few reports we handled badly, resulting in hurt feelings and a bad experience for all involved.

On reflection, the reports that had positive outcomes were ones where the Working Group took the role of advisor/facilitator, listening to complaints and providing suggestions and advice to the parties involved. The reports that had negative outcomes were ones where the subject of the report felt threatened by the Working Group and Code of Conduct.

After some discussion among the Working Group, we saw that we are most effective as facilitators, rather than disciplinarians. The various Go spaces already have moderators; this change to the CoC acknowledges their authority and places the group in a purely advisory role. If an incident is reported to the group we may provide information to or make a suggestion to the moderators, but the Working Group need not (and should not) have any authority to take disciplinary action. In short, we want it to be clear that the Working Group are here to help resolve conflict, period.

The second change made here is the removal of the anonymous reporting mechanism. To date, the quality of anonymous reports has been low, and with no way to reach out to the reporter for more information there is often very little we can do in response. Removing this one-way reporting mechanism strengthens the message that the Working Group are here to facilitate a constructive dialogue.

Change-Id: Iee52aff5446accd0dae0c937bb3aa89709ad5fb4 Reviewed-on: https://go-review.googlesource.com/37014 Reviewed-by: Andrew Gerrand <adg@golang.org> Reviewed-by: Russ Cox <rsc@golang.org>
-
Ian Lance Taylor authored
I don't know why it is not working. Filed issue 19111 for this. Fixes build. Update #19111. Change-Id: I76f8d6aafba5951da2f3ad7d10960419cca7dd1f Reviewed-on: https://go-review.googlesource.com/37092 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Ian Lance Taylor authored
It can't work since Plan 9 does not support the runtime poller. Fixes build. Change-Id: I9ec33eb66019d9364c6ff6519b61b32e59498559 Reviewed-on: https://go-review.googlesource.com/37091 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Russ Cox authored
We have seen one instance of a production job suddenly spinning to 100% CPU and becoming unresponsive. In that one instance, a SIGQUIT was sent after 328 minutes of spinning, and the stacks showed a single goroutine in "IO wait (scan)" state.

Looking for things that might get stuck if a goroutine got stuck in scanning a stack, we found that injectglist does:

	lock(&sched.lock)
	var n int
	for n = 0; glist != nil; n++ {
		gp := glist
		glist = gp.schedlink.ptr()
		casgstatus(gp, _Gwaiting, _Grunnable)
		globrunqput(gp)
	}
	unlock(&sched.lock)

and that casgstatus spins on gp.atomicstatus until the _Gscan bit goes away. Essentially, this code locks sched.lock and then while holding sched.lock, waits to lock gp.atomicstatus.

The code that is doing the scan is:

	if castogscanstatus(gp, s, s|_Gscan) {
		if !gp.gcscandone {
			scanstack(gp, gcw)
			gp.gcscandone = true
		}
		restartg(gp)
		break loop
	}

More analysis showed that scanstack can, in a rare case, end up calling back into code that acquires sched.lock. For example:

	runtime.scanstack at proc.go:866
	calls runtime.gentraceback at mgcmark.go:842
	calls runtime.scanstack$1 at traceback.go:378
	calls runtime.scanframeworker at mgcmark.go:819
	calls runtime.scanblock at mgcmark.go:904
	calls runtime.greyobject at mgcmark.go:1221
	calls (*runtime.gcWork).put at mgcmark.go:1412
	calls (*runtime.gcControllerState).enlistWorker at mgcwork.go:127
	calls runtime.wakep at mgc.go:632
	calls runtime.startm at proc.go:1779
	acquires runtime.sched.lock at proc.go:1675

This path was found with an automated deadlock-detecting tool. There are many such paths but they all go through enlistWorker -> wakep. The evidence strongly suggests that one of these paths is what caused the deadlock we observed. We're running those jobs with GOTRACEBACK=crash now to try to get more information if it happens again.

Further refinement and analysis shows that if we drop the wakep call from enlistWorker, the remaining few deadlock cycles found by the tool are all false positives caused by not understanding the effect of calls to func variables.

The enlistWorker -> wakep call was intended only as a performance optimization, it rarely executes, and if it does execute at just the wrong time it can (and plausibly did) cause the deadlock we saw. Comment it out, to avoid the potential deadlock.

Fixes #19112. Unfixes #14179.

Change-Id: I6f7e10b890b991c11e79fab7aeefaf70b5d5a07b Reviewed-on: https://go-review.googlesource.com/37093 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Hana Kim authored
in heap profile with debug mode Change-Id: I3a80d03a4aa556614626067a8fd698b3b00f4290 Reviewed-on: https://go-review.googlesource.com/36962 Reviewed-by: Austin Clements <austin@google.com>
-
Heschi Kreinick authored
Change-Id: If268b42b32e6bcd6e7913bffa6e493dc78af40aa Reviewed-on: https://go-review.googlesource.com/36539 TryBot-Result: Gobot Gobot <gobot@golang.org> Run-TryBot: Heschi Kreinick <heschi@google.com> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Lynn Boger authored
This adds more information to the pkg stale reason for debugging purposes. Change-Id: I7b626db4520baa1127195ae859f4da9b49304636 Reviewed-on: https://go-review.googlesource.com/36944 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Ian Lance Taylor authored
This changes the os package to use the runtime poller for file I/O where possible. When a system call blocks on a pollable descriptor, the goroutine will be blocked on the poller but the thread will be released to run other goroutines. When using a non-pollable descriptor, the os package will continue to use thread-blocking system calls as before.

For example, on GNU/Linux, the runtime poller uses epoll. epoll does not support ordinary disk files, so they will continue to use blocking I/O as before. The poller will be used for pipes.

Since this means that the poller is used for many more programs, this modifies the runtime to only block waiting for the poller if there is some goroutine that is waiting on the poller. Otherwise, there is no point, as the poller will never make any goroutine ready. This preserves the runtime's current simple deadlock detection.

This seems to crash FreeBSD systems, so it is disabled on FreeBSD. This is issue 19093.

Using the poller on Windows requires opening the file with FILE_FLAG_OVERLAPPED. We should only do that if we can remove that flag if the program calls the Fd method. This is issue 19098.

Update #6817. Update #7903. Update #15021. Update #18507. Update #19093. Update #19098. Change-Id: Ia5197dcefa7c6fbcca97d19a6f8621b2abcbb1fe Reviewed-on: https://go-review.googlesource.com/36800 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Russ Cox <rsc@golang.org>
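User code does not change with this CL. The sketch below (illustrative, not from the CL) is simply the kind of pattern that benefits: a goroutine blocked in Read on a pipe is now parked on the runtime poller rather than occupying an OS thread:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	r, w, err := os.Pipe()
    	if err != nil {
    		panic(err)
    	}
    	done := make(chan string)
    	go func() {
    		buf := make([]byte, 64)
    		n, _ := r.Read(buf) // blocks on the poller, not on a thread
    		done <- string(buf[:n])
    	}()
    	fmt.Fprint(w, "hello through the pipe")
    	w.Close()
    	fmt.Println(<-done)
    	r.Close()
    }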
-
Dave Cheney authored
Change-Id: Ic2b20c8238ff0ca5513d32e54ef2945fa4d0c3d2 Reviewed-on: https://go-review.googlesource.com/37033 Run-TryBot: Dave Cheney <dave@cheney.net> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Marcel van Lohuizen authored
Fixes golang/go#18815. Change-Id: Ic9d5cb640a555c58baedd597ed4ca5dd9f275c97 Reviewed-on: https://go-review.googlesource.com/36990 Run-TryBot: Marcel van Lohuizen <mpvl@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Robert Griesemer authored
- ignore them, if they don't.
- added tests

Fixes #18393. Change-Id: I13f87b81ac6b9138ab5031bb3dd6bebc4c548156 Reviewed-on: https://go-review.googlesource.com/37020 Run-TryBot: Robert Griesemer <gri@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Alex Brainman authored
We did not create it. We should not delete it. Change-Id: If98454ab233ce25367e11a7c68d31b49074537dd Reviewed-on: https://go-review.googlesource.com/37030 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Robert Griesemer authored
Fixes #18231. Change-Id: If1615da4db0e6f0516369a1dc37340d80c78f237 Reviewed-on: https://go-review.googlesource.com/37018 Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Robert Griesemer authored
Until now, the parser set the position for each Node to the position of the first token belonging to that node. For compatibility with the now defunct gc parser, in many places that position information was modified when the gcCompat flag was set (which it was, by default). Furthermore, in some places, position information was not set at all.

This change removes the gcCompat flag and all associated code, and sets position information for all nodes in a more principled way, as proposed by mdempsky (see #16943 for details). Specifically, the position of a node may not be at the very beginning of the respective production. For instance for an Operation `a + b`, the position associated with the node is the position of the `+`. Thus, for `a + b + c` we now get different positions for the two additions.

This change does not pass toolstash -cmp because position information recorded in export data and pcline tables is different. There are no other functional changes.

Added test suite testing the position of all nodes.

Fixes #16943. Change-Id: I3fc02bf096bc3b3d7d2fa655dfd4714a1a0eb90c Reviewed-on: https://go-review.googlesource.com/37017 Run-TryBot: Robert Griesemer <gri@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
- 14 Feb, 2017 3 commits
-
-
Daniel Martí authored
Change-Id: I280c53be455f2fe0474ad577c0f7b7908a4eccb2 Reviewed-on: https://go-review.googlesource.com/36993 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Russ Cox authored
The new tests in this CL have been checked against Go 1.7 as well and all pass in Go 1.7, with the one exception noted in a comment (an intentional change to omitempty already present before this CL). CL 15684 made the intentional change to omitempty. This CL fixes bugs introduced along the way. Most of these are corner cases that are arguably not that important, but they've always worked all the way back to Go 1, and someone cared enough to file #19063. The most significant problem found while adding tests is that in the case of a nil *string field with `xml:",chardata"`, the existing code silently stops processing not just that field but the entire remainder of the struct. Even if #19063 were not worth fixing, this chardata bug would be. Fixes #19063. Change-Id: I318cf8f9945e1a4615982d9904e109fde577ebf9 Reviewed-on: https://go-review.googlesource.com/36954 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Daniel Martí <mvdan@mvdan.cc> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
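A sketch of the shape of the corner case described above: a pointer field carrying an element's character data. The Item type and the expected outputs are illustrative, not taken from the CL's tests:

    package main

    import (
    	"encoding/xml"
    	"fmt"
    )

    type Item struct {
    	XMLName xml.Name `xml:"item"`
    	ID      string   `xml:"id,attr"`
    	Text    *string  `xml:",chardata"`
    }

    func main() {
    	s := "hello"
    	withText, _ := xml.Marshal(Item{ID: "1", Text: &s})
    	fmt.Println(string(withText)) // <item id="1">hello</item>

    	// Before this CL, a nil *string chardata field could silently stop
    	// processing of the remainder of the struct during marshaling.
    	withoutText, _ := xml.Marshal(Item{ID: "2", Text: nil})
    	fmt.Println(string(withoutText)) // <item id="2"></item> (expected)
    }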
-
Bryan C. Mills authored
These are possible use-cases for sync.Map. Updates golang/go#18177 Change-Id: I5e2a3d1249967c37d3f89a41122bf4a90522db11 Reviewed-on: https://go-review.googlesource.com/36964 Reviewed-by: Ian Lance Taylor <iant@golang.org>
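One commonly cited use-case is a write-once, read-many cache. The sketch below assumes the sync.Map API as it later shipped in the standard library (Load/LoadOrStore); it memoizes a trivial computation across goroutines:

    package main

    import (
    	"fmt"
    	"sync"
    )

    var cache sync.Map // behaves like a concurrent map[string]int

    func lookup(key string) int {
    	if v, ok := cache.Load(key); ok {
    		return v.(int)
    	}
    	v, _ := cache.LoadOrStore(key, len(key)) // compute once, share afterwards
    	return v.(int)
    }

    func main() {
    	var wg sync.WaitGroup
    	for i := 0; i < 4; i++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			fmt.Println(lookup("hello"))
    		}()
    	}
    	wg.Wait()
    }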
-