- 04 Nov, 2015 7 commits
-
-
Brad Fitzpatrick authored
Noticed from nacl trybot failures on new tests in https://golang.org/cl/16630. Related earlier fix of mine to nacl's listen code: "syscall: fix nacl listener to not accept connections once closed" (https://go-review.googlesource.com/15940). Perhaps a better fix (in the future?) would be to remove the listener from the map at close, but that didn't seem entirely straightforward last time I looked into it. It's not my code, but it seems that the map entry continues to have a purpose even after the Listener is closed. (?) But given that this code is only really used for running tests and the playground, this seems fine. Change-Id: I43bfedc57c07f215f4d79c18f588d3650687a48f Reviewed-on: https://go-review.googlesource.com/16650 Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
David Crawshaw authored
Fix for iOS builder. Change-Id: I5b6c977b187446c848182a9294d5bed6b5f9f6e4 Reviewed-on: https://go-review.googlesource.com/16633 Run-TryBot: David Crawshaw <crawshaw@golang.org> Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
-
unknown authored
Unify the implementation of the existing md5.Write function with the other digest implementations (sha1, sha256, sha512). Change-Id: I58ae02d165b17fc221953a5b4b986048b46c0508 Reviewed-on: https://go-review.googlesource.com/16621 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Mohit Agarwal authored
builder.copyFile ensures that the destination is an object file. This wouldn't be true if we are not writing to a regular file and the copy fails. Check whether the destination is an object file only if we are writing to a regular file. While removing the file, ensure that it is a regular file so that device files and such aren't removed when running as a user with sufficient privileges. Fixes #12407 Change-Id: Ie86ce9770fa59aa56fc486a5962287859b69db3d Reviewed-on: https://go-review.googlesource.com/16585 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
This introduces a recursive variant of the go:nowritebarrier annotation that prohibits write barriers not only in the annotated function, but in all functions it calls, recursively. The error message gives the shortest call stack from the annotated function to the function containing the prohibited write barrier, including the names of the functions and the line numbers of the calls. To demonstrate the annotation, we apply it to gcmarkwb_m, the write barrier itself. This is a new annotation rather than a modification of the existing go:nowritebarrier annotation because, for better or worse, there are many go:nowritebarrier functions that do call functions with write barriers. In most of these cases this is benign because the annotation was conservative, but it prohibits simply coopting the existing annotation. Change-Id: I225ca483c8f699e8436373ed96349e80ca2c2479 Reviewed-on: https://go-review.googlesource.com/16554 Reviewed-by: Keith Randall <khr@golang.org>
-
Ian Lance Taylor authored
The code works without the newline, but it looks funny: "func _cgoexp_15afe6549f62_GoFn(a unsafe.Pointer, n int32) { fn := GoFn". This adds a newline after the '{'. Change-Id: I6c465abe16f47924426d1b22b91004b3a3586ebd Reviewed-on: https://go-review.googlesource.com/16612 Run-TryBot: Ian Lance Taylor <iant@golang.org> Reviewed-by: Minux Ma <minux@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Brad Fitzpatrick authored
The Server's server goroutine was panicking (but recovering) when cleaning up after handling a request. It was pretty harmless (it just closed that one connection and didn't kill the whole process) but it was distracting. Updates #13135 Change-Id: I2a0ce9e8b52c8d364e3f4ce245e05c6f8d62df14 Reviewed-on: https://go-review.googlesource.com/16572 Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Andrew Gerrand <adg@golang.org>
-
- 03 Nov, 2015 12 commits
-
-
Ian Lance Taylor authored
The width of the type of an external variable defined with a type literal may not be set when the instrumentation pass is run. There are two cases in the standard library that fail without the call to dowidth: ../../../src/encoding/base32/base32.go:322: constant -1000000000 overflows uintptr ../../../src/encoding/base32/base32.go:329: constant -1000000000 overflows uintptr ../../../src/encoding/json/encode.go:385: constant -1000000000 overflows uintptr ../../../src/encoding/json/encode.go:387: constant -1000000000 overflows uintptr Change-Id: I7c3334f7decdb7488595ffe4090cd262d7334283 Reviewed-on: https://go-review.googlesource.com/16331 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Russ Cox <rsc@golang.org>
-
Ian Lance Taylor authored
On older versions of GCC we need to pass a file name before GCC will report an unrecognized option. Fixes #13065. Change-Id: I7ed34c01a006966a446059025f7d10235c649072 Reviewed-on: https://go-review.googlesource.com/16589 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Dmitry Vyukov authored
runtime.free is long gone. Change-Id: I058f69e6481b8fa008e1951c29724731a8a3d081 Reviewed-on: https://go-review.googlesource.com/16593 Reviewed-by: Austin Clements <austin@google.com> Run-TryBot: Austin Clements <austin@google.com>
-
Austin Clements authored
Currently the gcWork abstraction caches a single work buffer. As a result, if a worker is putting and getting pointers right at the boundary of a work buffer, it can flap between work buffers and (potentially significantly) increase contention on the global work buffer lists. This change modifies gcWork to instead cache two work buffers and switch off between them. This introduces one buffers' worth of hysteresis and eliminates the above performance worst case by amortizing the cost of getting or putting a work buffer over at least one buffers' worth of work. In practice, it's difficult to trigger this worst case with reasonably large work buffers. On the garbage benchmark, this reduces the max writes/sec to the global work list from 32K to 25K and the median from 6K to 5K. However, if a workload were to trigger this worst case behavior, it could significantly drive up this contention. This has negligible effects on the go1 benchmarks and slightly speeds up the garbage benchmark. name old time/op new time/op delta XBenchGarbage-12 5.90ms ± 3% 5.83ms ± 4% -1.18% (p=0.011 n=18+18) name old time/op new time/op delta BinaryTree17-12 3.22s ± 4% 3.17s ± 3% -1.57% (p=0.009 n=19+20) Fannkuch11-12 2.44s ± 1% 2.53s ± 4% +3.78% (p=0.000 n=18+19) FmtFprintfEmpty-12 50.2ns ± 2% 50.5ns ± 5% ~ (p=0.631 n=19+20) FmtFprintfString-12 167ns ± 1% 166ns ± 1% ~ (p=0.141 n=20+20) FmtFprintfInt-12 162ns ± 1% 159ns ± 1% -1.80% (p=0.000 n=20+20) FmtFprintfIntInt-12 277ns ± 2% 263ns ± 1% -4.78% (p=0.000 n=20+18) FmtFprintfPrefixedInt-12 240ns ± 1% 232ns ± 2% -3.25% (p=0.000 n=20+20) FmtFprintfFloat-12 311ns ± 1% 315ns ± 2% +1.17% (p=0.000 n=20+20) FmtManyArgs-12 1.05µs ± 2% 1.03µs ± 2% -1.72% (p=0.000 n=20+20) GobDecode-12 8.65ms ± 1% 8.71ms ± 2% +0.68% (p=0.001 n=19+20) GobEncode-12 6.51ms ± 1% 6.54ms ± 1% +0.42% (p=0.047 n=20+19) Gzip-12 318ms ± 2% 315ms ± 2% -1.20% (p=0.000 n=19+19) Gunzip-12 42.2ms ± 2% 42.1ms ± 1% ~ (p=0.667 n=20+19) HTTPClientServer-12 62.5µs ± 1% 62.4µs ± 1% ~ (p=0.110 n=20+18) JSONEncode-12 16.8ms ± 1% 16.8ms ± 2% ~ (p=0.569 n=19+20) JSONDecode-12 60.8ms ± 2% 59.8ms ± 1% -1.69% (p=0.000 n=19+19) Mandelbrot200-12 3.87ms ± 1% 3.85ms ± 0% -0.61% (p=0.001 n=20+17) GoParse-12 3.76ms ± 2% 3.76ms ± 1% ~ (p=0.698 n=20+20) RegexpMatchEasy0_32-12 100ns ± 2% 101ns ± 2% ~ (p=0.065 n=19+20) RegexpMatchEasy0_1K-12 342ns ± 2% 333ns ± 1% -2.82% (p=0.000 n=20+19) RegexpMatchEasy1_32-12 83.3ns ± 2% 83.2ns ± 2% ~ (p=0.692 n=20+19) RegexpMatchEasy1_1K-12 498ns ± 2% 490ns ± 1% -1.52% (p=0.000 n=18+20) RegexpMatchMedium_32-12 131ns ± 2% 131ns ± 2% ~ (p=0.464 n=20+18) RegexpMatchMedium_1K-12 39.3µs ± 2% 39.6µs ± 1% +0.77% (p=0.000 n=18+19) RegexpMatchHard_32-12 2.04µs ± 2% 2.06µs ± 1% +0.69% (p=0.009 n=19+20) RegexpMatchHard_1K-12 61.4µs ± 2% 62.1µs ± 1% +1.21% (p=0.000 n=19+20) Revcomp-12 534ms ± 1% 529ms ± 1% -0.97% (p=0.000 n=19+16) Template-12 70.4ms ± 2% 70.0ms ± 1% ~ (p=0.070 n=19+19) TimeParse-12 359ns ± 3% 344ns ± 1% -4.15% (p=0.000 n=19+19) TimeFormat-12 357ns ± 1% 361ns ± 2% +1.05% (p=0.002 n=20+20) [Geo mean] 62.4µs 62.0µs -0.56% name old speed new speed delta GobDecode-12 88.7MB/s ± 1% 88.1MB/s ± 2% -0.68% (p=0.001 n=19+20) GobEncode-12 118MB/s ± 1% 117MB/s ± 1% -0.42% (p=0.046 n=20+19) Gzip-12 60.9MB/s ± 2% 61.7MB/s ± 2% +1.21% (p=0.000 n=19+19) Gunzip-12 460MB/s ± 2% 461MB/s ± 1% ~ (p=0.661 n=20+19) JSONEncode-12 116MB/s ± 1% 115MB/s ± 2% ~ (p=0.555 n=19+20) JSONDecode-12 31.9MB/s ± 2% 32.5MB/s ± 1% +1.72% (p=0.000 n=19+19) GoParse-12 15.4MB/s ± 2% 15.4MB/s ± 1% ~ (p=0.653 n=20+20) 
RegexpMatchEasy0_32-12 317MB/s ± 2% 315MB/s ± 2% ~ (p=0.141 n=19+20) RegexpMatchEasy0_1K-12 2.99GB/s ± 2% 3.07GB/s ± 1% +2.86% (p=0.000 n=20+19) RegexpMatchEasy1_32-12 384MB/s ± 2% 385MB/s ± 2% ~ (p=0.672 n=20+19) RegexpMatchEasy1_1K-12 2.06GB/s ± 2% 2.09GB/s ± 1% +1.54% (p=0.000 n=18+20) RegexpMatchMedium_32-12 7.62MB/s ± 2% 7.63MB/s ± 2% ~ (p=0.800 n=20+18) RegexpMatchMedium_1K-12 26.0MB/s ± 1% 25.8MB/s ± 1% -0.77% (p=0.000 n=18+19) RegexpMatchHard_32-12 15.7MB/s ± 2% 15.6MB/s ± 1% -0.69% (p=0.010 n=19+20) RegexpMatchHard_1K-12 16.7MB/s ± 2% 16.5MB/s ± 1% -1.19% (p=0.000 n=19+20) Revcomp-12 476MB/s ± 1% 481MB/s ± 1% +0.97% (p=0.000 n=19+16) Template-12 27.6MB/s ± 2% 27.7MB/s ± 1% ~ (p=0.071 n=19+19) [Geo mean] 99.1MB/s 99.3MB/s +0.27% Change-Id: I68bcbf74ccb716cd5e844a554f67b679135105e6 Reviewed-on: https://go-review.googlesource.com/16042Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
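To make the hysteresis idea concrete, here is a toy sketch (not the runtime's gcWork; the type names, buffer size, and global list are invented for illustration) of caching two buffers and swapping between them, so that put traffic at a buffer boundary only reaches the shared list once per full buffer:

    package main

    import "fmt"

    const bufCap = 4 // illustrative; the real work buffers hold hundreds of pointers

    type buffer struct{ ptrs []uintptr }

    type worker struct {
        wbuf1, wbuf2 *buffer   // local cache: primary and secondary buffers
        global       []*buffer // stand-in for the shared full/empty lists
    }

    func (w *worker) put(p uintptr) {
        if len(w.wbuf1.ptrs) == bufCap {
            // Primary is full: swap to the secondary instead of flushing right away.
            w.wbuf1, w.wbuf2 = w.wbuf2, w.wbuf1
            if len(w.wbuf1.ptrs) == bufCap {
                // Both are full: pay the global-list cost, once per bufCap puts.
                w.global = append(w.global, w.wbuf1)
                w.wbuf1 = &buffer{}
            }
        }
        w.wbuf1.ptrs = append(w.wbuf1.ptrs, p)
    }

    func main() {
        w := &worker{wbuf1: &buffer{}, wbuf2: &buffer{}}
        for i := uintptr(1); i <= 10; i++ {
            w.put(i)
        }
        fmt.Println("buffers flushed to global list:", len(w.global)) // 1
    }

With a single cached buffer, a put/get sequence sitting exactly at the boundary would hit the global list on nearly every operation; the swap amortizes that cost over a full buffer's worth of work.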
-
Dmitry Vyukov authored
Handling of special records for tiny allocations has two problems: 1. Once we queue a finalizer we mark the object. As a result, any subsequent finalizers for the same object will not be queued during this GC cycle. If we have 16 finalizers set up (the worst case), finalization takes 16 GC cycles. This is what caused the misbehavior of tinyfin.go; the actual flakiness was caused by the fact that fing is asynchronous and doesn't always run before the check. 2. If a tiny block has both finalizer and profile specials, it is possible that we queue the finalizer, keep the object live, and free the profile record, so the heap profile can be skewed. Fix both issues by analyzing all special records for a single object at once. Also, make the tinyfin test stricter and remove its reliance on real time, and add a test for problem 2 (currently the heap profile misses about half of live memory). Fixes #13100 Change-Id: I9ae4dc1c44893724138a4565ca5cae29f2e97544 Reviewed-on: https://go-review.googlesource.com/16591 Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
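A minimal sketch in the spirit of the stricter tinyfin test described above, using only the public runtime API (the constants and names are illustrative, not the actual test): it sets finalizers on a batch of tiny allocations and polls across GC cycles until all of them have run, instead of relying on real time or on a single cycle being enough.

    package main

    import (
        "fmt"
        "runtime"
        "sync/atomic"
        "time"
    )

    func main() {
        const N = 16
        var done int32
        for i := 0; i < N; i++ {
            x := new(int32) // small allocations like these may share a tiny-alloc block
            *x = int32(i)
            runtime.SetFinalizer(x, func(p *int32) { atomic.AddInt32(&done, 1) })
        }
        // Finalizers run asynchronously on a dedicated goroutine after GC,
        // so keep triggering GC and polling rather than assuming one cycle suffices.
        for atomic.LoadInt32(&done) < N {
            runtime.GC()
            time.Sleep(10 * time.Millisecond)
        }
        fmt.Println("all finalizers ran:", atomic.LoadInt32(&done))
    }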
-
Ilya Tocar authored
Currently we have a special case for 1-byte strings; this extends it to strings shorter than 32 bytes on amd64. Results (broadwell): name old time/op new time/op delta IndexRune-4 57.4ns ± 0% 57.5ns ± 0% +0.10% (p=0.000 n=20+19) IndexRuneFastPath-4 20.4ns ± 0% 20.4ns ± 0% ~ (all samples are equal) Index-4 21.0ns ± 0% 21.8ns ± 0% +3.81% (p=0.000 n=20+20) LastIndex-4 7.07ns ± 1% 6.98ns ± 0% -1.21% (p=0.000 n=20+16) IndexByte-4 18.3ns ± 0% 18.3ns ± 0% ~ (all samples are equal) IndexHard1-4 1.46ms ± 0% 0.39ms ± 0% -73.06% (p=0.000 n=16+16) IndexHard2-4 1.46ms ± 0% 0.30ms ± 0% -79.55% (p=0.000 n=18+18) IndexHard3-4 1.46ms ± 0% 0.66ms ± 0% -54.68% (p=0.000 n=19+19) LastIndexHard1-4 1.46ms ± 0% 1.46ms ± 0% -0.01% (p=0.036 n=18+20) LastIndexHard2-4 1.46ms ± 0% 1.46ms ± 0% ~ (p=0.588 n=19+19) LastIndexHard3-4 1.46ms ± 0% 1.46ms ± 0% ~ (p=0.283 n=17+20) IndexTorture-4 11.1µs ± 0% 11.1µs ± 0% +0.01% (p=0.000 n=18+17) Change-Id: I892781549f558f698be4e41f9f568e3d0611efb5 Reviewed-on: https://go-review.googlesource.com/16430 Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
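A hedged sketch of how a change like this is typically measured: a micro-benchmark, placed in a _test.go file, that searches for a short (sub-32-byte) needle. The package name and test data are placeholders, not the actual strings package benchmarks quoted above.

    package bench // in a file such as index_bench_test.go (names are illustrative)

    import (
        "strings"
        "testing"
    )

    func BenchmarkIndexShortNeedle(b *testing.B) {
        // Needle appears only at the very end, so Index must scan the haystack.
        haystack := strings.Repeat("ab", 1<<10) + "xyz"
        for i := 0; i < b.N; i++ {
            if strings.Index(haystack, "xyz") < 0 {
                b.Fatal("needle not found")
            }
        }
    }

Running such a benchmark with benchstat before and after the change is what produces tables like the one above.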
-
Austin Clements authored
Currently the GC work buffers are only 256 bytes and hence can record only 24 64-bit pointers. They were reduced from 4K in commits db7fd1c1 and a15818fe as a way to minimize the amount of work the per-P workbuf caches could "hide" from the mark phase and carry into the mark termination phase. However, this approach wasn't very robust and we later added a "mark 2" phase to address this problem head-on. Because of mark 2, there's now no benefit to having very small work buffers. But there are plenty of downsides: small work buffers increase contention on the work lists, increase the frequency and hence net overhead of acquiring and releasing work buffers, and somewhat increase memory overhead of the GC. This commit expands work buffers back to 4K (504 64-bit pointers). This reduces the rate of writes to work.full in the garbage benchmark from a peak of ~780,000 writes/sec to a peak of ~32,000 writes/sec. This has negligible effect on the go1 benchmarks. It slightly slows down the garbage benchmark. name old time/op new time/op delta XBenchGarbage-12 5.37ms ± 5% 5.60ms ± 2% +4.37% (p=0.000 n=20+20) Change-Id: Ic9cc28e7a125d23d9faf4f5e690fb8aa9bcdfb28 Reviewed-on: https://go-review.googlesource.com/15893 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
Currently, assists are non-preemptible, which means a heavily assisting G can block other Gs from running. At the beginning of a GC cycle, it can also delay scang, which will spin until the assist is done. Since scanning is currently done sequentially, this can seriously extend the length of the scan phase. Fix this by making assists preemptible. Since the assist holds work buffers and runs on the system stack, this must be done cooperatively: we make gcDrainN return on preemption, and make the assist return from the system stack and voluntarily Gosched. This is a prerequisite to enlarging the work buffers. Without this change, the delays and spinning in scang increase significantly. This has no effect on the go1 benchmarks. name old time/op new time/op delta XBenchGarbage-12 5.72ms ± 4% 5.37ms ± 5% -6.11% (p=0.000 n=20+20) Change-Id: I829e732a0f23b126da633516a1a9ec1a508fdbf1 Reviewed-on: https://go-review.googlesource.com/15894 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
GC assists must block until the assist can be satisfied (either through stealing credit or doing work) or the GC cycle ends. Currently, this is implemented as a retry loop with a 100 µs delay. This obviously isn't ideal, as it wastes CPU and delays mutator execution. It also has the somewhat peculiar downside that sleeping a G requires allocation, and this requires working around recursive allocation. Replace this timed delay with a proper scheduling queue. When an assist can't be satisfied immediately, it adds the allocating G to a queue and parks it. Any time background scan credit is flushed, it consults this queue, directly satisfies the debt of queued assists, and wakes up satisfied assists before flushing any remaining credit to the background credit pool. No effect on the go1 benchmarks. Slightly speeds up the garbage benchmark. name old time/op new time/op delta XBenchGarbage-12 5.81ms ± 1% 5.72ms ± 4% -1.65% (p=0.011 n=20+20) Updates #12041. Change-Id: I8ee3b6274dd097b12b10a8030796a958a4b0e7b7 Reviewed-on: https://go-review.googlesource.com/15890 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
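Outside the runtime, the retry-loop-to-wait-queue idea can be sketched with a sync.Cond. The runtime parks goroutines with its own primitives and lists, so everything below (creditPool, take, flush) is only an analogy with made-up names, not the actual assist queue.

    package main

    import (
        "fmt"
        "sync"
    )

    // creditPool is a toy analogue of the background scan credit pool: a caller
    // whose debt cannot be satisfied immediately parks on a condition variable
    // instead of sleeping and retrying on a timer.
    type creditPool struct {
        mu     sync.Mutex
        cond   *sync.Cond
        credit int64
    }

    func newCreditPool() *creditPool {
        p := &creditPool{}
        p.cond = sync.NewCond(&p.mu)
        return p
    }

    // take blocks until n units of credit are available, then consumes them.
    func (p *creditPool) take(n int64) {
        p.mu.Lock()
        for p.credit < n {
            p.cond.Wait() // park instead of polling
        }
        p.credit -= n
        p.mu.Unlock()
    }

    // flush deposits credit and wakes waiters whose debt may now be payable.
    func (p *creditPool) flush(n int64) {
        p.mu.Lock()
        p.credit += n
        p.mu.Unlock()
        p.cond.Broadcast()
    }

    func main() {
        p := newCreditPool()
        done := make(chan struct{})
        go func() {
            p.take(100) // blocks until enough credit has been flushed
            close(done)
        }()
        p.flush(40)
        p.flush(60)
        <-done
        fmt.Println("assist satisfied")
    }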
-
Austin Clements authored
This eliminates many write barriers in the scheduler code that are unnecessary and will interfere with upcoming changes where the garbage collector will have to invoke run queue functions in contexts that must not have write barriers. Change-Id: I702d0ac99cfd00ffff406e7362917db6a43e7e55 Reviewed-on: https://go-review.googlesource.com/16556 Reviewed-by: Russ Cox <rsc@golang.org> Run-TryBot: Austin Clements <austin@google.com>
-
Michael Matloob authored
The error message should indicate the name of the unset variable, rather than the value. The value will always be empty. Change-Id: I6f6c165074dfce857b6523703a890d205423cd28 Reviewed-on: https://go-review.googlesource.com/16555 Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Todd Neal authored
Replace various implementations of inlining prevention with "go:noinline". Change-Id: Iac90895c3a62d6f4b7a6c72e11e165d15a0abfa4 Reviewed-on: https://go-review.googlesource.com/16510 Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Todd Neal <todd@tneal.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
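For reference, a small self-contained example of the go:noinline directive these call sites now use; the function and values are arbitrary, and the directive must appear on the line immediately before the function declaration.

    package main

    import "fmt"

    // go:noinline tells the compiler not to inline the function that follows,
    // which is how tests keep a call "real" instead of relying on ad-hoc tricks
    // (large bodies, closures, etc.) to defeat the inliner.

    //go:noinline
    func add(a, b int) int {
        return a + b
    }

    func main() {
        fmt.Println(add(1, 2))
    }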
-
- 02 Nov, 2015 4 commits
-
-
Michael Hudson-Doyle authored
golang.org/cl/16436 added a local symbol for every global function, but also added a duplicate entry for the global symbol. Surprisingly this hasn't caused any noticeable problems, but it's still wrong. Change-Id: Icd3906760f8aaf7bef31ffd4f2d866d73d36dc2c Reviewed-on: https://go-review.googlesource.com/16581 Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Ilya Tocar authored
Use AVX2 if available. Results (haswell), below: name old time/op new time/op delta BytesCompare1-6 11.4ns ± 0% 11.4ns ± 0% ~ (all samples are equal) BytesCompare2-6 11.4ns ± 0% 11.4ns ± 0% ~ (all samples are equal) BytesCompare4-6 11.4ns ± 0% 11.4ns ± 0% ~ (all samples are equal) BytesCompare8-6 9.29ns ± 2% 8.76ns ± 0% -5.72% (p=0.000 n=16+17) BytesCompare16-6 9.29ns ± 2% 9.20ns ± 0% -1.02% (p=0.000 n=20+16) BytesCompare32-6 11.4ns ± 1% 11.4ns ± 0% ~ (p=0.191 n=20+20) BytesCompare64-6 14.4ns ± 0% 13.1ns ± 0% -8.68% (p=0.000 n=20+20) BytesCompare128-6 20.2ns ± 0% 18.5ns ± 0% -8.27% (p=0.000 n=16+20) BytesCompare256-6 29.3ns ± 0% 24.5ns ± 0% -16.38% (p=0.000 n=16+16) BytesCompare512-6 46.8ns ± 0% 37.1ns ± 0% -20.78% (p=0.000 n=18+16) BytesCompare1024-6 82.9ns ± 0% 62.3ns ± 0% -24.86% (p=0.000 n=20+14) BytesCompare2048-6 155ns ± 0% 112ns ± 0% -27.74% (p=0.000 n=20+20) CompareBytesEqual-6 10.1ns ± 1% 10.0ns ± 1% ~ (p=0.527 n=20+20) CompareBytesToNil-6 10.0ns ± 2% 9.4ns ± 0% -6.57% (p=0.000 n=20+17) CompareBytesEmpty-6 8.76ns ± 0% 8.76ns ± 0% ~ (all samples are equal) CompareBytesIdentical-6 8.76ns ± 0% 8.76ns ± 0% ~ (all samples are equal) CompareBytesSameLength-6 10.6ns ± 1% 10.6ns ± 1% ~ (p=0.240 n=20+20) CompareBytesDifferentLength-6 10.6ns ± 0% 10.6ns ± 1% ~ (p=1.000 n=20+20) CompareBytesBigUnaligned-6 132µs ± 1% 105µs ± 1% -20.61% (p=0.000 n=20+18) CompareBytesBig-6 125µs ± 1% 105µs ± 1% -16.31% (p=0.000 n=20+20) CompareBytesBigIdentical-6 8.13ns ± 0% 8.13ns ± 0% ~ (all samples are equal) name old speed new speed delta CompareBytesBigUnaligned-6 7.94GB/s ± 1% 10.01GB/s ± 1% +25.96% (p=0.000 n=20+18) CompareBytesBig-6 8.38GB/s ± 1% 10.01GB/s ± 1% +19.48% (p=0.000 n=20+20) CompareBytesBigIdentical-6 129TB/s ± 0% 129TB/s ± 0% +0.01% (p=0.003 n=17+19) Change-Id: I820f31bab4582dd4204b146bb077c0d2f24cd8f5 Reviewed-on: https://go-review.googlesource.com/16434 Run-TryBot: Ilya Tocar <ilya.tocar@intel.com> Reviewed-by: Klaus Post <klauspost@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
-
David Crawshaw authored
Change-Id: I630ae29ecb39252642883398cc51d49133c6f3d7 Reviewed-on: https://go-review.googlesource.com/16451 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: David Crawshaw <crawshaw@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Brad Fitzpatrick authored
* use new(int32) to be pedantic about documented SetFinalizer rules: "The argument x must be a pointer to an object allocated by calling new or by taking the address of a composite literal" * remove the amd64-only restriction. The GC is fully precise everywhere now, even on 32-bit. (keep the gccgo restriction, though) * remove a data race (perhaps the actual bug) and use atomic.LoadInt32 for the final check. The race detector is now happy, too. Updates #13100 Change-Id: I8d05c0ac4f046af9ba05701ad709c57984b34893 Reviewed-on: https://go-review.googlesource.com/16535 Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
- 01 Nov, 2015 4 commits
-
-
Ian Lance Taylor authored
Change-Id: I288bd3091ea81db7b616747cbec8958a31d98b7e Reviewed-on: https://go-review.googlesource.com/16532 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Michael Hudson-Doyle authored
To avoid collisions with what existing code may already be doing. Change-Id: Ice639440aafc0724714c25333d90a49954372230 Reviewed-on: https://go-review.googlesource.com/16503 Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Mikio Hara authored
This change makes Dial, Listen and ListenPacket with an invalid port fail regardless of the GODEBUG=netdns setting. Note that cgoLookupPort with an out-of-range literal number may return either the lower or upper bound value, 0 or 65535, with no error on some platforms. Fixes #11715. Change-Id: I43f9c4fb5526d1bf50b97698e0eb39d29fd74c35 Reviewed-on: https://go-review.googlesource.com/12447 Reviewed-by: Ian Lance Taylor <iant@golang.org>
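A short sketch of the user-visible effect, assuming a post-change toolchain; the exact error strings are not specified here and may differ by version, but the out-of-range port should be rejected up front whichever resolver GODEBUG=netdns selects.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Port 123456 is out of the valid 0-65535 range.
        _, err := net.Dial("tcp", "example.com:123456")
        fmt.Println("Dial:", err) // expected: an invalid-port style error

        _, err = net.Listen("tcp", ":99999")
        fmt.Println("Listen:", err) // likewise rejected before any resolver work
    }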
-
Alex Brainman authored
New tests are also added, so this CL perhaps corrects even more broken EvalSymlinks behaviour. Fixes #12451 Change-Id: I81b9d92bab74bcb8eca6db6633546982fe5cec87 Reviewed-on: https://go-review.googlesource.com/16192 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Alex Brainman <alex.brainman@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
- 31 Oct, 2015 1 commit
-
-
Ian Lance Taylor authored
The GNU binutils recently picked up support for new 386/amd64 relocations. Add support for them in the Go linker when doing an internal link. The 386 relocation R_386_GOT32X was proposed in https://groups.google.com/forum/#!topic/ia32-abi/GbJJskkid4I . It can be treated as identical to the R_386_GOT32 relocation. The amd64 relocations R_X86_64_GOTPCRELX and R_X86_64_REX_GOTPCRELX were proposed in https://groups.google.com/forum/#!topic/x86-64-abi/n9AWHogmVY0 . They can both be treated as identical to the R_X86_64_GOTPCREL relocation. The purpose of the new relocations is to permit additional linker relaxations in some cases. We do not attempt to support those cases. While we're at it, remove the unused and in some cases out of date _COUNT names from ld/elf.go. Fixes #13114. Change-Id: I34ef07f6fcd00cdd2996038ecf46bb77a49e968b Reviewed-on: https://go-review.googlesource.com/16529 Reviewed-by: Minux Ma <minux@golang.org>
-
- 30 Oct, 2015 12 commits
-
-
Matthew Dempsky authored
The code is meant to return "<nil>", but because of a make([]Type, 8) call that should be make([]Type, 0, 8), the nil Type happens to already appear in the array. Change-Id: I2db140046e52f27db1b0ac84bde2b6680677dd95 Reviewed-on: https://go-review.googlesource.com/16464 Run-TryBot: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Robert Griesemer <gri@golang.org>
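The underlying slice mistake is easy to reproduce with any pointer element type; here []*int stands in for the package's []Type.

    package main

    import "fmt"

    func main() {
        // make([]T, 8) creates a slice that already contains eight zero values,
        // so a nil element "appears" in the array before anything is appended.
        // make([]T, 0, 8) is an empty slice with room for eight.
        withLen := make([]*int, 8)    // len 8, cap 8: eight nil elements present
        withCap := make([]*int, 0, 8) // len 0, cap 8: no elements yet

        fmt.Println(len(withLen), cap(withLen)) // 8 8
        fmt.Println(len(withCap), cap(withCap)) // 0 8
        fmt.Println(withLen[0] == nil)          // true, without any append
    }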
-
Matthew Dempsky authored
Noticed by cmd/vet. Expected values array produced by Python instead of Keisan because: 1) Keisan's website calculator is painfully difficult to copy/paste values into and out of, and 2) after tediously computing e^(vf[i] * 10) - 1 via Keisan I discovered that Keisan computing vf[i]*10 in a higher precision was giving substantially different output values. Also, testing uses "close" instead of "veryclose" because 386's assembly implementation produces values for some of the test cases that fail "veryclose". Curiously, Expm1(vf[i]*10) is identical to Exp(vf[i]*10)-1 on 386, whereas with the portable implementation they're only "veryclose". Investigating these questions is left to someone else. I just wanted to fix the cmd/vet warning. Fixes #13101. Change-Id: Ica8f6c267d01aa4cc31f53593e95812746942fbc Reviewed-on: https://go-review.googlesource.com/16505 Run-TryBot: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Klaus Post <klauspost@gmail.com> Reviewed-by: Robert Griesemer <gri@golang.org>
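For readers unfamiliar with why Expm1 exists at all, a tiny illustration (not part of the CL) of the cancellation it avoids:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Expm1(x) computes e**x - 1 accurately even when x is near zero,
        // where evaluating Exp(x) - 1 directly loses precision to cancellation.
        x := 1e-10
        fmt.Println(math.Expm1(x))   // close to 1.00000000005e-10
        fmt.Println(math.Exp(x) - 1) // noticeably less accurate for such small x
    }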
-
Austin Clements authored
This moves another root scanning task out of the GC coordinator and parallelizes it on the GC workers. This has negligible effect on the go1 benchmarks and the garbage benchmark. name old time/op new time/op delta XBenchGarbage-12 5.24ms ± 1% 5.26ms ± 1% +0.30% (p=0.007 n=18+17) name old time/op new time/op delta BinaryTree17-12 3.20s ± 5% 3.21s ± 5% ~ (p=0.264 n=20+18) Fannkuch11-12 2.46s ± 1% 2.54s ± 2% +3.09% (p=0.000 n=18+20) FmtFprintfEmpty-12 49.9ns ± 4% 50.0ns ± 5% ~ (p=0.356 n=20+20) FmtFprintfString-12 170ns ± 1% 170ns ± 2% ~ (p=0.815 n=19+20) FmtFprintfInt-12 160ns ± 1% 159ns ± 1% -0.63% (p=0.003 n=18+19) FmtFprintfIntInt-12 270ns ± 1% 267ns ± 1% -1.00% (p=0.000 n=19+18) FmtFprintfPrefixedInt-12 238ns ± 1% 232ns ± 1% -2.28% (p=0.000 n=19+19) FmtFprintfFloat-12 310ns ± 2% 313ns ± 2% +0.93% (p=0.000 n=19+19) FmtManyArgs-12 1.06µs ± 1% 1.04µs ± 1% -1.93% (p=0.000 n=20+19) GobDecode-12 8.63ms ± 1% 8.70ms ± 1% +0.81% (p=0.001 n=20+19) GobEncode-12 6.52ms ± 1% 6.56ms ± 1% +0.66% (p=0.000 n=20+19) Gzip-12 318ms ± 1% 319ms ± 1% ~ (p=0.405 n=17+18) Gunzip-12 42.1ms ± 2% 42.0ms ± 1% ~ (p=0.771 n=20+19) HTTPClientServer-12 62.6µs ± 1% 62.9µs ± 1% +0.41% (p=0.038 n=20+20) JSONEncode-12 16.9ms ± 1% 16.9ms ± 1% ~ (p=0.077 n=18+20) JSONDecode-12 60.7ms ± 1% 62.3ms ± 1% +2.73% (p=0.000 n=20+20) Mandelbrot200-12 3.86ms ± 1% 3.85ms ± 1% ~ (p=0.084 n=19+20) GoParse-12 3.75ms ± 2% 3.73ms ± 1% ~ (p=0.107 n=20+19) RegexpMatchEasy0_32-12 100ns ± 2% 101ns ± 2% +0.97% (p=0.001 n=20+19) RegexpMatchEasy0_1K-12 342ns ± 2% 332ns ± 2% -2.86% (p=0.000 n=19+19) RegexpMatchEasy1_32-12 83.2ns ± 2% 82.8ns ± 2% ~ (p=0.108 n=19+20) RegexpMatchEasy1_1K-12 495ns ± 2% 490ns ± 2% -1.04% (p=0.000 n=18+19) RegexpMatchMedium_32-12 130ns ± 2% 131ns ± 2% ~ (p=0.291 n=20+20) RegexpMatchMedium_1K-12 39.3µs ± 1% 39.9µs ± 1% +1.54% (p=0.000 n=18+20) RegexpMatchHard_32-12 2.02µs ± 1% 2.05µs ± 2% +1.19% (p=0.000 n=19+19) RegexpMatchHard_1K-12 60.9µs ± 1% 61.5µs ± 1% +0.99% (p=0.000 n=18+18) Revcomp-12 535ms ± 1% 531ms ± 1% -0.82% (p=0.000 n=17+17) Template-12 73.0ms ± 1% 74.1ms ± 1% +1.47% (p=0.000 n=20+20) TimeParse-12 356ns ± 2% 348ns ± 1% -2.30% (p=0.000 n=20+20) TimeFormat-12 347ns ± 1% 353ns ± 1% +1.68% (p=0.000 n=19+20) [Geo mean] 62.3µs 62.4µs +0.12% name old speed new speed delta GobDecode-12 88.9MB/s ± 1% 88.2MB/s ± 1% -0.81% (p=0.001 n=20+19) GobEncode-12 118MB/s ± 1% 117MB/s ± 1% -0.66% (p=0.000 n=20+19) Gzip-12 60.9MB/s ± 1% 60.8MB/s ± 1% ~ (p=0.409 n=17+18) Gunzip-12 461MB/s ± 2% 462MB/s ± 1% ~ (p=0.765 n=20+19) JSONEncode-12 115MB/s ± 1% 115MB/s ± 1% ~ (p=0.078 n=18+20) JSONDecode-12 32.0MB/s ± 1% 31.1MB/s ± 1% -2.65% (p=0.000 n=20+20) GoParse-12 15.5MB/s ± 2% 15.5MB/s ± 1% ~ (p=0.111 n=20+19) RegexpMatchEasy0_32-12 318MB/s ± 2% 314MB/s ± 2% -1.27% (p=0.000 n=20+19) RegexpMatchEasy0_1K-12 2.99GB/s ± 1% 3.08GB/s ± 2% +2.94% (p=0.000 n=19+19) RegexpMatchEasy1_32-12 385MB/s ± 2% 386MB/s ± 2% ~ (p=0.105 n=19+20) RegexpMatchEasy1_1K-12 2.07GB/s ± 1% 2.09GB/s ± 2% +1.06% (p=0.000 n=18+19) RegexpMatchMedium_32-12 7.64MB/s ± 2% 7.61MB/s ± 1% ~ (p=0.179 n=20+20) RegexpMatchMedium_1K-12 26.1MB/s ± 1% 25.7MB/s ± 1% -1.52% (p=0.000 n=18+20) RegexpMatchHard_32-12 15.8MB/s ± 1% 15.6MB/s ± 2% -1.18% (p=0.000 n=19+19) RegexpMatchHard_1K-12 16.8MB/s ± 2% 16.6MB/s ± 1% -0.90% (p=0.000 n=19+18) Revcomp-12 475MB/s ± 1% 479MB/s ± 1% +0.83% (p=0.000 n=17+17) Template-12 26.6MB/s ± 1% 26.2MB/s ± 1% -1.45% (p=0.000 n=20+20) [Geo mean] 99.0MB/s 98.7MB/s -0.32% Change-Id: I6ea44d7a59aaa6851c64695277ab65645ff9d32e Reviewed-on: 
https://go-review.googlesource.com/16070 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com>
-
Austin Clements authored
Currently the concurrent root scan is performed in its entirety by the GC coordinator before entering concurrent mark (which enables GC workers). This scan is done sequentially, which can prolong the scan phase, delay the mark phase, and means that the scan phase does not obey the 25% CPU goal. Furthermore, there's no need to complete the root scan before starting marking (in fact, we already allow GC assists to happen during the scan phase), so this acts as an unnecessary barrier between root scanning and marking. This change shifts the root scan work out of the GC coordinator and in to the GC workers. The coordinator simply sets up the scan state and enqueues the right number of root scan jobs. The GC workers then drain the root scan jobs prior to draining heap scan jobs. This parallelizes the root scan process, makes it obey the 25% CPU goal, and effectively eliminates root scanning as an isolated phase, allowing the system to smoothly transition from root scanning to heap marking. This also eliminates a major non-STW responsibility of the GC coordinator, which will make it easier to switch to a decentralized state machine. Finally, it puts us in a good position to perform root scanning in assists as well, which will help satisfy assists at the beginning of the GC cycle. This is mostly straightforward. One tricky aspect is that we have to deal with preemption deadlock: where two non-preemptible gorountines are trying to preempt each other to perform a stack scan. Given the context where this happens, the only instance of this is two background workers trying to scan each other. We avoid this by simply not scanning the stacks of background workers during the concurrent phase; this is safe because we'll scan them during mark termination (and their stacks are *very* small and should not contain any new pointers). This change also switches the root marking during mark termination to use the same gcDrain-based code path as concurrent mark. This shouldn't affect performance because STW root marking was already parallel and tasks switched to heap marking immediately when no more root marking tasks were available. However, it simplifies the code and unifies these code paths. This has negligible effect on the go1 benchmarks. It slightly slows down the garbage benchmark, possibly by making GC run slightly more frequently. 
name old time/op new time/op delta XBenchGarbage-12 5.10ms ± 1% 5.24ms ± 1% +2.87% (p=0.000 n=18+18) name old time/op new time/op delta BinaryTree17-12 3.25s ± 3% 3.20s ± 5% -1.57% (p=0.013 n=20+20) Fannkuch11-12 2.45s ± 1% 2.46s ± 1% +0.38% (p=0.019 n=20+18) FmtFprintfEmpty-12 49.7ns ± 3% 49.9ns ± 4% ~ (p=0.851 n=19+20) FmtFprintfString-12 170ns ± 2% 170ns ± 1% ~ (p=0.775 n=20+19) FmtFprintfInt-12 161ns ± 1% 160ns ± 1% -0.78% (p=0.000 n=19+18) FmtFprintfIntInt-12 267ns ± 1% 270ns ± 1% +1.04% (p=0.000 n=19+19) FmtFprintfPrefixedInt-12 238ns ± 2% 238ns ± 1% ~ (p=0.133 n=18+19) FmtFprintfFloat-12 311ns ± 1% 310ns ± 2% -0.35% (p=0.023 n=20+19) FmtManyArgs-12 1.08µs ± 1% 1.06µs ± 1% -2.31% (p=0.000 n=20+20) GobDecode-12 8.65ms ± 1% 8.63ms ± 1% ~ (p=0.377 n=18+20) GobEncode-12 6.49ms ± 1% 6.52ms ± 1% +0.37% (p=0.015 n=20+20) Gzip-12 319ms ± 3% 318ms ± 1% ~ (p=0.975 n=19+17) Gunzip-12 41.9ms ± 1% 42.1ms ± 2% +0.65% (p=0.004 n=19+20) HTTPClientServer-12 61.7µs ± 1% 62.6µs ± 1% +1.40% (p=0.000 n=18+20) JSONEncode-12 16.8ms ± 1% 16.9ms ± 1% ~ (p=0.239 n=20+18) JSONDecode-12 58.4ms ± 1% 60.7ms ± 1% +3.85% (p=0.000 n=19+20) Mandelbrot200-12 3.86ms ± 0% 3.86ms ± 1% ~ (p=0.092 n=18+19) GoParse-12 3.75ms ± 2% 3.75ms ± 2% ~ (p=0.708 n=19+20) RegexpMatchEasy0_32-12 100ns ± 1% 100ns ± 2% +0.60% (p=0.010 n=17+20) RegexpMatchEasy0_1K-12 341ns ± 1% 342ns ± 2% ~ (p=0.203 n=20+19) RegexpMatchEasy1_32-12 82.5ns ± 2% 83.2ns ± 2% +0.83% (p=0.007 n=19+19) RegexpMatchEasy1_1K-12 495ns ± 1% 495ns ± 2% ~ (p=0.970 n=19+18) RegexpMatchMedium_32-12 130ns ± 2% 130ns ± 2% +0.59% (p=0.039 n=19+20) RegexpMatchMedium_1K-12 39.2µs ± 1% 39.3µs ± 1% ~ (p=0.214 n=18+18) RegexpMatchHard_32-12 2.03µs ± 2% 2.02µs ± 1% ~ (p=0.166 n=18+19) RegexpMatchHard_1K-12 61.0µs ± 1% 60.9µs ± 1% ~ (p=0.169 n=20+18) Revcomp-12 533ms ± 1% 535ms ± 1% ~ (p=0.071 n=19+17) Template-12 68.1ms ± 2% 73.0ms ± 1% +7.26% (p=0.000 n=19+20) TimeParse-12 355ns ± 2% 356ns ± 2% ~ (p=0.530 n=19+20) TimeFormat-12 357ns ± 2% 347ns ± 1% -2.59% (p=0.000 n=20+19) [Geo mean] 62.1µs 62.3µs +0.31% name old speed new speed delta GobDecode-12 88.7MB/s ± 1% 88.9MB/s ± 1% ~ (p=0.377 n=18+20) GobEncode-12 118MB/s ± 1% 118MB/s ± 1% -0.37% (p=0.015 n=20+20) Gzip-12 60.9MB/s ± 3% 60.9MB/s ± 1% ~ (p=0.944 n=19+17) Gunzip-12 464MB/s ± 1% 461MB/s ± 2% -0.64% (p=0.004 n=19+20) JSONEncode-12 115MB/s ± 1% 115MB/s ± 1% ~ (p=0.236 n=20+18) JSONDecode-12 33.2MB/s ± 1% 32.0MB/s ± 1% -3.71% (p=0.000 n=19+20) GoParse-12 15.5MB/s ± 2% 15.5MB/s ± 2% ~ (p=0.702 n=19+20) RegexpMatchEasy0_32-12 320MB/s ± 1% 318MB/s ± 2% ~ (p=0.094 n=18+20) RegexpMatchEasy0_1K-12 3.00GB/s ± 1% 2.99GB/s ± 1% ~ (p=0.194 n=20+19) RegexpMatchEasy1_32-12 388MB/s ± 2% 385MB/s ± 2% -0.83% (p=0.008 n=19+19) RegexpMatchEasy1_1K-12 2.07GB/s ± 1% 2.07GB/s ± 1% ~ (p=0.964 n=19+18) RegexpMatchMedium_32-12 7.68MB/s ± 1% 7.64MB/s ± 2% -0.57% (p=0.020 n=19+20) RegexpMatchMedium_1K-12 26.1MB/s ± 1% 26.1MB/s ± 1% ~ (p=0.211 n=18+18) RegexpMatchHard_32-12 15.8MB/s ± 1% 15.8MB/s ± 1% ~ (p=0.180 n=18+19) RegexpMatchHard_1K-12 16.8MB/s ± 1% 16.8MB/s ± 2% ~ (p=0.236 n=20+19) Revcomp-12 477MB/s ± 1% 475MB/s ± 1% ~ (p=0.071 n=19+17) Template-12 28.5MB/s ± 2% 26.6MB/s ± 1% -6.77% (p=0.000 n=19+20) [Geo mean] 100MB/s 99.0MB/s -0.82% Change-Id: I875bf6ceb306d1ee2f470cabf88aa6ede27c47a0 Reviewed-on: https://go-review.googlesource.com/16059Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
We already have gcMarkWorkAvailable, but the check for GC mark work is open-coded in several places. Generalize gcMarkWorkAvailable slightly and replace these open-coded checks with calls to gcMarkWorkAvailable. In addition to cleaning up the code, this puts us in a better position to make this check slightly more complicated. Change-Id: I1b29883300ecd82a1bf6be193e9b4ee96582a860 Reviewed-on: https://go-review.googlesource.com/16058 Reviewed-by: Rick Hudson <rlh@golang.org> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Brad Fitzpatrick authored
Flaky tests do more harm than good. Updates #13098 Change-Id: I179ed810b49bbb96c8df462bfa20b70231b26772 Reviewed-on: https://go-review.googlesource.com/16521 Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Marvin Stenger authored
Type Op is enforced now. Type EType will need further CLs. Added TODOs where Node.EType is used as a union type; the TODOs have the format `TODO(marvin): Fix Node.EType union type.`. Furthermore: the flag parameter of the Econv function in fmt.go is removed, since it is unused, and there is some cleanup along the way, e.g. declaring vars where they are first initialized. Passes go build -toolexec 'toolstash -cmp' -a std. Fixes #11846 Change-Id: I908b955d5a78a195604970983fb9194bd9e9260b Reviewed-on: https://go-review.googlesource.com/14956 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
-
Taru Karttunen authored
Include syscall.Stat_t on unix in the unexported fileStat structure rather than accessing it through an interface. Additionally, add a benchmark for Readdir (and Readdirnames). Tested on linux, freebsd, netbsd, openbsd, darwin, and solaris; does not touch windows stuff. Does not change the API, as discussed on golang-dev. E.g. on linux/amd64 with a directory of 65 files: benchmark old ns/op new ns/op delta BenchmarkReaddir-4 67774 66225 -2.29% benchmark old allocs new allocs delta BenchmarkReaddir-4 334 269 -19.46% benchmark old bytes new bytes delta BenchmarkReaddir-4 25208 24168 -4.13% Change-Id: I44ef72a04ad7055523a980f29aa11122040ae8fe Reviewed-on: https://go-review.googlesource.com/16423 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
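A rough sketch of what such a benchmark looks like, assuming only the standard os and testing packages; it reads the current directory rather than the 65-file directory quoted above, and the package/file names are placeholders.

    package osbench // in a file such as readdir_bench_test.go (names are illustrative)

    import (
        "os"
        "testing"
    )

    func BenchmarkReaddirnames(b *testing.B) {
        for i := 0; i < b.N; i++ {
            f, err := os.Open(".")
            if err != nil {
                b.Fatal(err)
            }
            names, err := f.Readdirnames(-1) // -1 reads every entry in one call
            f.Close()
            if err != nil {
                b.Fatal(err)
            }
            _ = names
        }
    }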
-
Russ Cox authored
For CL 16512, #12366, #13107. Change-Id: I0ed1bb9597ac3db3fa35a037c304060d5a7e6d51 Reviewed-on: https://go-review.googlesource.com/16513 Reviewed-by: Austin Clements <austin@google.com>
-
Russ Cox authored
Abandon (but still support) the old numbering system: GOTRACEBACK=none is old 0; GOTRACEBACK=single is the new behavior; GOTRACEBACK=all is old 1; GOTRACEBACK=system is old 2; GOTRACEBACK=crash is unchanged. See the doc comment change in runtime1.go for details. Filed #13107 to decide whether to change the default back to GOTRACEBACK=all for the Go 1.6 release. If you run into programs where printing only the current goroutine omits needed information, please add details in a comment on that issue. Fixes #12366. Change-Id: I82ca8b99b5d86dceb3f7102d38d2659d45dbe0db Reviewed-on: https://go-review.googlesource.com/16512 Reviewed-by: Austin Clements <austin@google.com>
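A small demo program (not from the CL) for comparing the settings by hand: build it once and run the binary with GOTRACEBACK=none, single, all, and system to see how much of the crash output changes.

    package main

    // With "none" only the panic message prints; "single" shows the panicking
    // goroutine's stack; "all" additionally shows the parked helper goroutine
    // below; "system" adds runtime-internal goroutines and frames as well.
    func main() {
        block := make(chan struct{})
        go func() { <-block }() // extra goroutine, visible only at "all" and above
        panic("boom")
    }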
-
Russ Cox authored
Avoids Mac firewall box. Change-Id: I000e421fa9639612d636b6fa4baf905459c5aeb2 Reviewed-on: https://go-review.googlesource.com/16514 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Robert Griesemer authored
Change-Id: Ibb15282a2baeb58439b085d70b82797d8c71de36 Reviewed-on: https://go-review.googlesource.com/16502 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-