- 30 Oct, 2017 8 commits
-
-
Austin Clements authored
This CL implements the compiler support for calling the buffered write barrier added by the previous CL. Since the buffered write barrier is only implemented on amd64 right now, this still supports the old, eager write barrier as well. There's little overhead to supporting both, and this way a few tests in test/fixedbugs that expect to have liveness maps at write barrier calls can easily opt in to the old, eager barrier.

This significantly improves the performance of the write barrier:

name             old time/op  new time/op  delta
WriteBarrier-12  73.5ns ±20%  19.2ns ±27%  -73.90%  (p=0.000 n=19+18)

It also reduces the size of binaries because the write barrier call is more compact:

name        old object-bytes  new object-bytes  delta
Template        398k ± 0%         393k ± 0%     -1.14%  (p=0.008 n=5+5)
Unicode         208k ± 0%         206k ± 0%     -1.00%  (p=0.008 n=5+5)
GoTypes        1.18M ± 0%        1.15M ± 0%     -2.00%  (p=0.008 n=5+5)
Compiler       4.05M ± 0%        3.88M ± 0%     -4.26%  (p=0.008 n=5+5)
SSA            8.25M ± 0%        8.11M ± 0%     -1.59%  (p=0.008 n=5+5)
Flate           228k ± 0%         224k ± 0%     -1.83%  (p=0.008 n=5+5)
GoParser        295k ± 0%         284k ± 0%     -3.62%  (p=0.008 n=5+5)
Reflect        1.00M ± 0%        0.99M ± 0%     -0.70%  (p=0.008 n=5+5)
Tar             339k ± 0%         333k ± 0%     -1.67%  (p=0.008 n=5+5)
XML             404k ± 0%         395k ± 0%     -2.10%  (p=0.008 n=5+5)
[Geo mean]      704k              690k          -2.00%

name       old exe-bytes  new exe-bytes  delta
HelloSize    1.05M ± 0%     1.04M ± 0%   -1.55%  (p=0.008 n=5+5)

https://perf.golang.org/search?q=upload:20171027.1

(Amusingly, this also reduces compiler allocations by 0.75%, which, combined with the better write barrier, speeds up the compiler overall by 2.10%. See the perf link.)

It slightly improves the performance of most of the go1 benchmarks and improves the performance of the x/benchmarks:

name                      old time/op  new time/op  delta
BinaryTree17-12              2.40s ± 1%   2.47s ± 1%  +2.69%  (p=0.000 n=19+19)
Fannkuch11-12                2.95s ± 0%   2.95s ± 0%  +0.21%  (p=0.000 n=20+19)
FmtFprintfEmpty-12          41.8ns ± 4%  41.4ns ± 2%  -1.03%  (p=0.014 n=20+20)
FmtFprintfString-12         68.7ns ± 2%  67.5ns ± 1%  -1.75%  (p=0.000 n=20+17)
FmtFprintfInt-12            79.0ns ± 3%  77.1ns ± 1%  -2.40%  (p=0.000 n=19+17)
FmtFprintfIntInt-12          127ns ± 1%   123ns ± 3%  -3.42%  (p=0.000 n=20+20)
FmtFprintfPrefixedInt-12     152ns ± 1%   150ns ± 1%  -1.02%  (p=0.000 n=18+17)
FmtFprintfFloat-12           211ns ± 1%   209ns ± 0%  -0.99%  (p=0.000 n=20+16)
FmtManyArgs-12               500ns ± 0%   496ns ± 0%  -0.73%  (p=0.000 n=17+20)
GobDecode-12                6.44ms ± 1%  6.53ms ± 0%  +1.28%  (p=0.000 n=20+19)
GobEncode-12                5.46ms ± 0%  5.46ms ± 1%    ~     (p=0.550 n=19+20)
Gzip-12                      220ms ± 1%   216ms ± 0%  -1.75%  (p=0.000 n=19+19)
Gunzip-12                   38.8ms ± 0%  38.6ms ± 0%  -0.30%  (p=0.000 n=18+19)
HTTPClientServer-12         79.0µs ± 1%  78.2µs ± 1%  -1.01%  (p=0.000 n=20+20)
JSONEncode-12               11.9ms ± 0%  11.9ms ± 0%  -0.29%  (p=0.000 n=20+19)
JSONDecode-12               52.6ms ± 0%  52.2ms ± 0%  -0.68%  (p=0.000 n=19+20)
Mandelbrot200-12            3.69ms ± 0%  3.68ms ± 0%  -0.36%  (p=0.000 n=20+20)
GoParse-12                  3.13ms ± 1%  3.18ms ± 1%  +1.67%  (p=0.000 n=19+20)
RegexpMatchEasy0_32-12      73.2ns ± 1%  72.3ns ± 1%  -1.19%  (p=0.000 n=19+18)
RegexpMatchEasy0_1K-12       241ns ± 0%   239ns ± 0%  -0.83%  (p=0.000 n=17+16)
RegexpMatchEasy1_32-12      68.6ns ± 1%  69.0ns ± 1%  +0.47%  (p=0.015 n=18+16)
RegexpMatchEasy1_1K-12       364ns ± 0%   361ns ± 0%  -0.67%  (p=0.000 n=16+17)
RegexpMatchMedium_32-12      104ns ± 1%   103ns ± 1%  -0.79%  (p=0.001 n=20+15)
RegexpMatchMedium_1K-12     33.8µs ± 3%  34.0µs ± 2%    ~     (p=0.267 n=20+19)
RegexpMatchHard_32-12       1.64µs ± 1%  1.62µs ± 2%  -1.25%  (p=0.000 n=19+18)
RegexpMatchHard_1K-12       49.2µs ± 0%  48.7µs ± 1%  -0.93%  (p=0.000 n=19+18)
Revcomp-12                   391ms ± 5%   396ms ± 7%    ~     (p=0.154 n=19+19)
Template-12                 63.1ms ± 0%  59.5ms ± 0%  -5.76%  (p=0.000 n=18+19)
TimeParse-12                 307ns ± 0%   306ns ± 0%  -0.39%  (p=0.000 n=19+17)
TimeFormat-12                325ns ± 0%   323ns ± 0%  -0.50%  (p=0.000 n=19+19)
[Geo mean]                  47.3µs       46.9µs       -0.67%

https://perf.golang.org/search?q=upload:20171026.1

name                       old time/op  new time/op  delta
Garbage/benchmem-MB=64-12  2.25ms ± 1%  2.20ms ± 1%  -2.31%  (p=0.000 n=18+18)
HTTP-12                    12.6µs ± 0%  12.6µs ± 0%  -0.72%  (p=0.000 n=18+17)
JSON-12                    11.0ms ± 0%  11.0ms ± 1%  -0.68%  (p=0.000 n=17+19)

https://perf.golang.org/search?q=upload:20171026.2

Updates #14951. Updates #22460. Change-Id: Id4c0932890a1d41020071bec73b8522b1367d3e7 Reviewed-on: https://go-review.googlesource.com/73712 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
This implements runtime support for buffered write barriers on amd64. The buffered write barrier has a fast path that simply enqueues pointers in a per-P buffer. Unlike the current write barrier, this fast path is *not* a normal Go call and does not require the compiler to spill general-purpose registers or put arguments on the stack. When the buffer fills up, the write barrier takes the slow path, which spills all general purpose registers and flushes the buffer. We don't allow safe-points or stack splits while this frame is active, so it doesn't matter that we have no type information for the spilled registers in this frame.

One minor complication is cgocheck=2 mode, which uses the write barrier to detect Go pointers being written to non-Go memory. We obviously can't buffer this, so instead we set the buffer to its minimum size, forcing the write barrier into the slow path on every call. For this specific case, we pass additional information as arguments to the flush function. This also requires enabling the cgo write barrier slightly later during runtime initialization, after Ps (and the per-P write barrier buffers) have been initialized.

The code in this CL is not yet active. The next CL will modify the compiler to generate calls to the new write barrier.

This reduces the average cost of the write barrier by roughly a factor of 4, which will pay for the cost of having it enabled more of the time after we make the GC pacer less aggressive. (Benchmarks will be in the next CL.)

Updates #14951. Updates #22460. Change-Id: I396b5b0e2c5e5c4acfd761a3235fd15abadc6cb1 Reviewed-on: https://go-review.googlesource.com/73711 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
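To make the buffering idea concrete, here is a minimal, self-contained Go sketch of the mechanism described above, not the runtime's actual code: pointers are appended to a small fixed-size per-P buffer on the fast path, and a flush routine drains the buffer when it fills. The names wbBuf, put, and flush are illustrative only.

package main

import "fmt"

// wbBuf is a toy stand-in for a per-P write barrier buffer. The real
// runtime buffer lives in the P and its fast path is filled by assembly.
type wbBuf struct {
	buf  [8]uintptr // small fixed-size buffer of enqueued pointers
	next int        // index of the next free slot
}

// put is the "fast path": record a pointer and report whether a flush is needed.
func (b *wbBuf) put(p uintptr) (needFlush bool) {
	b.buf[b.next] = p
	b.next++
	return b.next == len(b.buf)
}

// flush is the "slow path": hand all buffered pointers to the collector
// (here we just print them) and reset the buffer.
func (b *wbBuf) flush() {
	for _, p := range b.buf[:b.next] {
		fmt.Printf("greying pointer %#x\n", p)
	}
	b.next = 0
}

func main() {
	var b wbBuf
	for i := 1; i <= 20; i++ {
		if b.put(uintptr(i * 0x1000)) {
			b.flush()
		}
	}
	b.flush() // drain any remainder
}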
-
Austin Clements authored
For #22460. Change-Id: I798f26d45bbe1efd16b632e201413cb26cb3e6c7 Reviewed-on: https://go-review.googlesource.com/73811 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently systemstack always calls its argument, even if we're already on the system stack. Unfortunately, traceback with _TraceJump stops at the first systemstack it sees, which often cuts off runtime stacks early in profiles. Fix this by performing a tail call if we're already on the system stack. This eliminates it from the traceback entirely, so it won't stop prematurely (or all get mushed into a single node in the profile graph). Change-Id: Ibc69e8765e899f8d3806078517b8c7314da196f4 Reviewed-on: https://go-review.googlesource.com/74050 Reviewed-by: Cherry Zhang <cherryyz@google.com> Reviewed-by: Keith Randall <khr@golang.org>
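A rough, runnable toy of the decision being described (the real systemstack is assembly in the runtime; the mach type, onSystemStack field, and stack-switch stand-ins below are invented purely for illustration): if we are already on the system stack, call fn directly instead of switching stacks, so the call adds no extra systemstack frame to a traceback.

package main

import "fmt"

// mach is a toy model of an M; onSystemStack stands in for "running on g0".
type mach struct{ onSystemStack bool }

func systemstack(m *mach, fn func()) {
	if m.onSystemStack {
		fn() // already on the system stack: the real code tail-calls fn
		return
	}
	m.onSystemStack = true // stand-in for switching to the g0 stack
	fn()
	m.onSystemStack = false // stand-in for switching back
}

func main() {
	m := &mach{}
	systemstack(m, func() {
		// A nested systemstack call no longer adds a second frame.
		systemstack(m, func() { fmt.Println("ran directly on the system stack") })
	})
}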
-
Russ Cox authored
The upcoming CL 73212 will see through mtime modifications. Change the underlying file too. Change-Id: Ib23b4136a62ee87bce408b76bb0385451ae7dcd2 Reviewed-on: https://go-review.googlesource.com/74130 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Lynn Boger authored
This adds support for math.Abs and math.Copysign to be intrinsics on ppc64x. The new instruction FCPSGN is added to generate fcpsgn. Some new rules are added to improve the int<->float conversions that are generated mainly due to the use of Float64bits and Float64frombits in the math package. PPC64.rules is also modified as suggested in the review for CL 63290.

Improvements:

benchmark                old ns/op  new ns/op  delta
BenchmarkAbs-16          1.12       0.69       -38.39%
BenchmarkCopysign-16     1.30       0.93       -28.46%
BenchmarkNextafter32-16  9.34       8.05       -13.81%
BenchmarkFrexp-16        8.81       7.60       -13.73%

Others that used Copysign also saw smaller improvements. I attempted to make this work using rules since that seems to be preferred, but due to the use of Float64bits and Float64frombits in these functions, several rules had to be added and even then not all cases were matched. Using rules became too complicated and seemed too fragile for these.

Updates #21390 Change-Id: Ia265da9a18355e08000818a4fba1a40e9e031995 Reviewed-on: https://go-review.googlesource.com/67130 Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com> Reviewed-by: Keith Randall <khr@golang.org>
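For reference, Copysign is just sign-bit manipulation on the IEEE 754 representation, which is why Float64bits/Float64frombits dominate these functions and why a dedicated fcpsgn instruction helps. A plain-Go equivalent, written here as an illustration rather than a copy of the library source, looks roughly like this:

package main

import (
	"fmt"
	"math"
)

// copysign returns a value with the magnitude of x and the sign of y,
// by splicing y's sign bit onto x's bits.
func copysign(x, y float64) float64 {
	const signBit = 1 << 63
	return math.Float64frombits(math.Float64bits(x)&^signBit | math.Float64bits(y)&signBit)
}

func main() {
	fmt.Println(copysign(3.5, -1.0))    // -3.5
	fmt.Println(math.Copysign(3.5, -1)) // -3.5, the library version
}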
-
Lynn Boger authored
Errors occur in runtime test testCgoPprofPIE when the test is built by passing -pie to the external linker with code that was not built as PIC. This occurs on ppc64le because non-PIC is the default, and fails only on newer distros where the address range used for programs is high enough to cause relocation overflow. This test should be built with -buildmode=pie since that correctly generates PIC with -pie. Related issues are #21954 and #22126. Updates #22459 Change-Id: Ib641440bc9f94ad2b97efcda14a4b482647be8f7 Reviewed-on: https://go-review.googlesource.com/73970 Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Hugues Bruant authored
This pragma is not actually honored by the compiler. The tests implicitly relied on the inliner being unable to inline closures with captured variables, which will soon change. Fixes #22208 Change-Id: I13abc9c930b9156d43ec216f8efb768952a29439 Reviewed-on: https://go-review.googlesource.com/73211 Reviewed-by: Michael Munday <mike.munday@ibm.com>
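For context, here is the kind of function the tests implicitly relied on: a closure that captures a local variable, which the inliner historically would not inline. This snippet is illustrative and is not taken from the tests themselves.

package main

import "fmt"

// f builds and calls a closure that captures x. Before the inliner learned
// to handle closures with captured variables, a call like g() was guaranteed
// not to be inlined, which some tests implicitly depended on.
func f() int {
	x := 1
	g := func() int { // closure capturing x
		x += 2
		return x
	}
	return g()
}

func main() { fmt.Println(f()) } // prints 3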
-
- 29 Oct, 2017 12 commits
-
-
Russ Cox authored
If runtime.GOROOT() and the os.Executable method for finding GOROOT find the same directory but with different spellings, prefer the spelling returned by runtime.GOROOT(). This avoids an inconsistency if "pwd" returns one spelling but a different spelling is used in $PATH (and therefore in os.Executable()). make.bash runs with GOROOT=$(cd .. && pwd); the goal is to allow the resulting toolchain to use that default setting (unless moved) even if the directory spelling is different in $PATH. Change-Id: If96b28b9e8697f4888f153a400b40bbf58a9128b Reviewed-on: https://go-review.googlesource.com/74250 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org> Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Austin Clements authored
recordspan has two remaining write barriers from writing to the pointer to the backing store of h.allspans. However, h.allspans is always backed by off-heap memory, so let the compiler know this. Unfortunately, this isn't quite as clean as most go:notinheap uses because we can't directly name the backing store of a slice, but we can get it done with some judicious casting. For #22460. Change-Id: I296f92fa41cf2cb6ae572b35749af23967533877 Reviewed-on: https://go-review.googlesource.com/73414 Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently copy and append for types containing only scalars and notinheap pointers still get compiled to have write barriers, even though those write barriers are unnecessary. Fix these to use HasHeapPointer instead of just Haspointer so that they elide write barriers when possible. This fixes the unnecessary write barrier in runtime.recordspan when it grows the h.allspans slice. This is important because recordspan gets called (*very* indirectly) from (*gcWork).tryGet, which is go:nowritebarrierrec. Unfortunately, the compiler's analysis has no hope of seeing this because it goes through the indirect call fixalloc.first, but I saw it happen. Change-Id: Ieba3abc555a45f573705eab780debcfe5c4f5dd1 Reviewed-on: https://go-review.googlesource.com/73413 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Austin Clements authored
Currently (*Type).HasHeapPointer only ignores pointers to go:notinheap types if the type itself is a pointer to a go:notinheap type. However, if it's some other type that contains pointers where all of those pointers are go:notinheap, it will conservatively return true. As a result, we'll use write barriers where they aren't needed, for example calling typedmemmove instead of just memmove on structs that contain only go:notinheap pointers. Fix this by making HasHeapPointer walk the whole type looking for pointers that aren't marked go:notinheap. Change-Id: Ib8c6abf6f7a20f34969d1d402c5498e0b990be59 Reviewed-on: https://go-review.googlesource.com/73412 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
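The compiler-internal change is not directly runnable outside the toolchain, but the shape of the analysis, walking a type and reporting whether any field transitively contains a pointer, can be sketched with reflect. This is only an analogy under stated assumptions: it ignores go:notinheap, which has no user-visible counterpart, and hasPointer is an invented name, not the compiler's function.

package main

import (
	"fmt"
	"reflect"
)

// hasPointer reports whether values of type t contain any pointers,
// recursing through struct fields and array elements in the same spirit
// as the compiler's HasHeapPointer walk (minus the go:notinheap case).
func hasPointer(t reflect.Type) bool {
	switch t.Kind() {
	case reflect.Ptr, reflect.UnsafePointer, reflect.Map, reflect.Chan,
		reflect.Slice, reflect.String, reflect.Func, reflect.Interface:
		return true
	case reflect.Array:
		return hasPointer(t.Elem())
	case reflect.Struct:
		for i := 0; i < t.NumField(); i++ {
			if hasPointer(t.Field(i).Type) {
				return true
			}
		}
		return false
	default:
		return false // integers, floats, bools, complex numbers
	}
}

func main() {
	type scalarsOnly struct {
		A, B int
		C    [4]float64
	}
	type withPtr struct{ P *int }
	fmt.Println(hasPointer(reflect.TypeOf(scalarsOnly{}))) // false
	fmt.Println(hasPointer(reflect.TypeOf(withPtr{})))     // true
}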
-
Austin Clements authored
Most write barrier calls are inserted by SSA, but copy and append are lowered to runtime.typedslicecopy during walk. Fix these to set Func.WBPos and emit the "write barrier" warning, as done for the write barriers inserted by SSA. As part of this, we refactor setting WBPos and emitting this warning into the frontend so it can be shared by both walk and SSA. Change-Id: I5fe9997d9bdb55e03e01dd58aee28908c35f606b Reviewed-on: https://go-review.googlesource.com/73411 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
-
Adam Langley authored
The crypto.Signer interface takes pre-hashed messages for ECDSA and RSA, but the argument in the implementations was called “msg”, not “digest”, which is confusing. This change renames them to help clarify the intended use. Change-Id: Ie2fb8753ca5280e493810d211c7c66223f94af88 Reviewed-on: https://go-review.googlesource.com/70950 Reviewed-by: Filippo Valsorda <hi@filippo.io>
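As a reminder of why "digest" is the right name: the caller hashes the message first and passes the digest to Sign, as in this small ECDSA example (illustrative usage, not code from the CL).

package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	var signer crypto.Signer = key // *ecdsa.PrivateKey implements crypto.Signer

	msg := []byte("hello, signer")
	digest := sha256.Sum256(msg) // the caller hashes; Sign receives the digest

	sig, err := signer.Sign(rand.Reader, digest[:], crypto.SHA256)
	if err != nil {
		panic(err)
	}
	fmt.Printf("signature: %x\n", sig)
}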
-
Austin Clements authored
The current go:nowritebarrierrec checker has two problems that limit its coverage:

1. It doesn't understand that systemstack calls its argument, which means there are several cases where we fail to detect prohibited write barriers.

2. It only observes calls in the AST, so calls constructed during lowering by SSA aren't followed.

This CL completely rewrites this checker to address these issues.

The current checker runs entirely after walk and uses visitBottomUp, which introduces several problems for checking across systemstack. First, visitBottomUp itself doesn't understand systemstack calls, so the callee may be ordered after the caller, causing the checker to fail to propagate constraints. Second, many systemstack calls are passed a closure, which is quite difficult to resolve back to the function definition after transformclosure and walk have run. Third, visitBottomUp works exclusively on the AST, so it can't observe calls created by SSA.

To address these problems, this commit splits the check into two phases and rewrites it to use a call graph generated during SSA lowering. The first phase runs before transformclosure/walk and simply records systemstack arguments when they're easy to get. Then, it modifies genssa to record static call edges at the point where we're lowering to Progs (which is the latest point at which position information is conveniently available). Finally, the second phase runs after all functions have been lowered and uses a direct BFS walk of the call graph (combining systemstack calls with static calls) to find prohibited write barriers and construct nice error messages.

Fixes #22384. For #22460. Change-Id: I39668f7f2366ab3c1ab1a71eaf25484d25349540 Reviewed-on: https://go-review.googlesource.com/72773 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
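The second phase is essentially a breadth-first search over the recorded call graph starting from functions marked go:nowritebarrierrec, flagging any reachable function that contains a write barrier and reporting the call chain. A stripped-down, self-contained illustration of that walk follows; the call graph, function names, and flags are invented for the example and are not the compiler's data structures.

package main

import "fmt"

// Toy call graph: static call edges plus "contains a write barrier" flags.
var calls = map[string][]string{
	"gchelper":  {"markroot"},
	"markroot":  {"scanblock"},
	"scanblock": {"greyobject"},
}
var hasWriteBarrier = map[string]bool{"greyobject": true}

// check BFS-walks the call graph from a go:nowritebarrierrec root and
// reports the first reachable function containing a write barrier,
// along with the call chain that reaches it.
func check(root string) {
	type node struct {
		fn   string
		path []string
	}
	seen := map[string]bool{root: true}
	queue := []node{{root, []string{root}}}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		if hasWriteBarrier[n.fn] {
			fmt.Printf("write barrier prohibited by %s: %v\n", root, n.path)
			return
		}
		for _, callee := range calls[n.fn] {
			if !seen[callee] {
				seen[callee] = true
				path := append(append([]string{}, n.path...), callee)
				queue = append(queue, node{callee, path})
			}
		}
	}
	fmt.Printf("%s: ok\n", root)
}

func main() { check("gchelper") }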
-
Austin Clements authored
We're about to start tracking nowritebarrierrec through systemstack calls, which detects that we're calling markroot (which has write barriers) from gchelper, which is called from the scheduler during STW apparently without a P. But it turns out that func helpgc, which wakes up blocked Ms to run gchelper, installs a P for gchelper to use. This means there *is* a P when gchelper runs, so it is allowed to have write barriers. Tell the compiler this by marking gchelper go:yeswritebarrierrec. Also, document the call to gchelper so I don't have to spend another half a day puzzling over how on earth this could possibly work before discovering the spooky action-at-a-distance in helpgc. Updates #22384. For #22460. Change-Id: I7394c9b4871745575f87a2d4fbbc5b8e54d669f7 Reviewed-on: https://go-review.googlesource.com/72772 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
We're about to start tracking nowritebarrierrec through systemstack calls, which will reveal write barriers in persistentalloc prohibited by various callers. The pointers manipulated by persistentalloc are always to off-heap memory, so this removes these write barriers statically by introducing a new go:notinheap type to represent generic off-heap memory. Updates #22384. For #22460. Change-Id: Id449d9ebf145b14d55476a833e7f076b0d261d57 Reviewed-on: https://go-review.googlesource.com/72771 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
We're about to start tracking nowritebarrierrec through systemstack calls, which will reveal write barriers in startpanic_m prohibited by various callers. We actually can allow write barriers here because the write barrier is a no-op when we're panicking. Let the compiler know. Updates #22384. For #22460. Change-Id: Ifb3a38d3dd9a4125c278c3680f8648f987a5b0b8 Reviewed-on: https://go-review.googlesource.com/72770 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently most of these are marked go:nowritebarrier as a hint, but it's actually important that these not invoke write barriers recursively. The danger is that some gcWork method would invoke the write barrier while the gcWork is in an inconsistent state and that the write barrier would in turn invoke some other gcWork method, which would crash or permanently corrupt the gcWork. Simply marking the write barrier itself as go:nowritebarrierrec isn't sufficient to prevent this if the write barrier doesn't use the outer method. Thankfully, this doesn't cause any build failures, so we were getting this right. :) For #22460. Change-Id: I35a7292a584200eb35a49507cd3fe359ba2206f6 Reviewed-on: https://go-review.googlesource.com/72554 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, newstack and gogo have write barriers for maintaining the context register saved in g.sched.ctxt. This is troublesome, because newstack can be called from go:nowritebarrierrec places that can't allow write barriers. It happens to be benign because g.sched.ctxt will always be nil on entry to newstack *and* it so happens the incoming ctxt will also always be nil in these contexts (I think/hope), but this is playing with fire. It's also desirable to mark newstack go:nowritebarrierrec to prevent any other, non-benign write barriers from creeping in, but we can't do that right now because of this one write barrier. Fix all of this by observing that g.sched.ctxt is really just a saved live pointer register. Hence, we can shade it when we scan g's stack and otherwise move it back and forth between the actual context register and g.sched.ctxt without write barriers. This means we can save it in morestack along with all of the other g.sched, eliminate the save from newstack along with its troublesome write barrier, and eliminate the shenanigans in gogo to invoke the write barrier when restoring it. Once we've done all of this, we can mark newstack go:nowritebarrierrec. Fixes #22385. For #22460. Change-Id: I43c24958e3f6785b53c1350e1e83c2844e0d1522 Reviewed-on: https://go-review.googlesource.com/72553 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
- 28 Oct, 2017 5 commits
-
-
Tobias Klauser authored
In case of a failed/cancelled build, src/cmd/dist/dist might be left in place. Change-Id: Id81b5d663476a880101a2eed54fa051c40b0b0bc Reviewed-on: https://go-review.googlesource.com/74150 Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Russ Cox authored
This reverts commit 4f2ee499. Reason for revert: broke mobile builders. Change-Id: I9fd3ef777ce6401c0c28b03f1dc53ddcdbef5111 Reviewed-on: https://go-review.googlesource.com/74170 Reviewed-by: Russ Cox <rsc@golang.org>
-
Alberto Donizetti authored
On linux/amd64, for now. Updates #16706 Change-Id: Ib8c89b6edc73fb88042c06873ff815d387491504 Reviewed-on: https://go-review.googlesource.com/69117 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Russ Cox authored
Change-Id: Ib342468f3727be0cd6268ab824ad06d783ee7c94 Reviewed-on: https://go-review.googlesource.com/73993 Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Russ Cox authored
For #22224. Change-Id: Iae873fddc72a79a96a32eaeb5d4dd885eaf810cb Reviewed-on: https://go-review.googlesource.com/73851 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
- 27 Oct, 2017 10 commits
-
-
Russ Cox authored
This is ugly but needed on the builders, because they do not set PWD/GOROOT consistently, and the new content-based staleness understands that the setting of GOROOT influences the content in the linker outputs. Change-Id: I0606f2c70b719619b188864ad3ae1b34432140cb Reviewed-on: https://go-review.googlesource.com/74070 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Joshua Rubin authored
Header.WriteSubset uses a sync.Pool but wouldn't Put the sorter back in the pool if there was an error writing to the io.Writer. I'm not really sure why the sorter is returned to begin with. The comment says "for possible return to headerSorterCache". This also doesn't address potential panics that might occur, but the overhead of doing the Put in a defer would likely be too great. Change-Id: If3c45a4c3e11f6ec65d187e25b63455b0142d4e3 Reviewed-on: https://go-review.googlesource.com/73910 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Tom Bergan <tombergan@google.com>
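The shape of the fix, sketched with a toy pooled object rather than net/http's internal headerSorter: get from the pool, and make sure the object goes back via an explicit Put on the error path as well as the success path, avoiding the cost of a defer. The bufPool and writeGreeting names are hypothetical.

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
	"sync"
)

var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

// writeGreeting formats into a pooled buffer and writes it to w.
// The buffer is returned to the pool on both the error and success paths,
// using explicit Put calls rather than defer to avoid defer overhead.
func writeGreeting(w io.Writer, name string) error {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	fmt.Fprintf(buf, "hello, %s\n", name)
	if _, err := w.Write(buf.Bytes()); err != nil {
		bufPool.Put(buf) // don't leak the pooled object on error
		return err
	}
	bufPool.Put(buf)
	return nil
}

func main() {
	if err := writeGreeting(os.Stdout, "gopher"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}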
-
Than McIntosh authored
New relocation flavor R_DWARFFILEREF, to be applied to DWARF attribute values that correspond to file references (ex: DW_AT_decl_file, DW_AT_call_file). The LSym for this relocation is the file itself; the linker replaces the relocation target with the index of the specified file in the line table's file section. Note: for testing purposes this patch changes the DWARF function subprogram DIE abbrev to include DW_AT_decl_file (allowed by DWARF but not especially useful) so as to have a way to test this functionality. This attribute will be removed once there are other file reference attributes (coming as part of inlining support). Change-Id: Icf676beb60fcc33f06d78e747ef717532daaa3ba Reviewed-on: https://go-review.googlesource.com/73330 Run-TryBot: Than McIntosh <thanm@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Russ Cox authored
The compiler depends on the way heap and sort break ties in some cases. Instead of trying to find them all, bundle those packages into the bootstrap compiler builds. The overall goal is that Go1.4 building cmd/compile during the bootstrap process produces a semantically equivalent compiler to cmd/compile compiling itself. After this CL, that property is true, at least for the compiler compiling itself and the other tools. A test for this property will be in CL 73212. Change-Id: Icc1ba7cbe828f5673e8198ebacb18c7c01f3a735 Reviewed-on: https://go-review.googlesource.com/73952 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Russ Cox authored
This will let us build the latest sort when bootstrapping the compiler. The compiler depends on the precise tie-breaks used by sort in some cases, and it's easier to bring sort along than require checking every sort call ever added to the compiler. Change-Id: Idc622f89aedbb40d848708c76650fc28779d0c3c Reviewed-on: https://go-review.googlesource.com/73951 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Than McIntosh authored
Don't pass -gdwarf-2 to the external linker when external linkage is requested. The Go compiler is now emitting DWARF version 4, so this doesn't seem needed any more. Fixes #22455 Change-Id: Ic4122c55e946619a266430f2d26f06d6803dd232 Reviewed-on: https://go-review.googlesource.com/73672 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Russ Cox authored
env.MkEnv was computing the full gcc command line to report as $GOGCCFLAGS in "go env" output, which meant running gcc (or clang) multiple times to discern which flags are available. We also set $GOGCCFLAGS in the environment, but nothing actually uses that as far as I can tell - it was always intended only for debugging. Move GOGCCFLAGS to env.ExtraEnvVars, which is displayed in "go env" output but not set in child processes and not computed nearly as often. The effect is that trivial commands like "go help" or "go env GOARCH" or "go tool -n compile" now run in about 0.01s instead of 0.1s, because they no longer run gcc 4 times each. go test -short cmd/go drops from 81s to 44s (and needs more trimming). The $GOROOT/test suite drops from 92s to 33s, because the number of gcc invocations drops from 13,336 to 0. Overall, all.bash drops from 5m53s to 4m07s wall time. Change-Id: Ia85abc89e1e2bb126b933aff3bf7c5f6c0984cd5 Reviewed-on: https://go-review.googlesource.com/73850 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Tobias Klauser authored
Add godoc comments for Time{spec,val} methods Unix and Nano. Change-Id: I285bbd236af588b30140db7182b05f8b202b5b0b Reviewed-on: https://go-review.googlesource.com/73271 Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> Reviewed-by: Ian Lance Taylor <iant@golang.org>
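For readers of those docs, the methods convert a syscall time value into seconds plus nanoseconds, or into total nanoseconds. A tiny usage sketch, which builds only on Unix-like systems where syscall.Timespec and NsecToTimespec are defined:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// NsecToTimespec and the Unix/Nano methods exist in syscall on
	// Unix-like systems; this example will not build on Windows.
	ts := syscall.NsecToTimespec(1500000042)
	sec, nsec := ts.Unix()
	fmt.Println(sec, nsec) // 1 500000042
	fmt.Println(ts.Nano()) // 1500000042
}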
-
Tobias Klauser authored
Directly return error instead of assigning to err and then returning. Change-Id: Ie5c466cac70cc6d52ee72ebba3e497e0da8a5797 Reviewed-on: https://go-review.googlesource.com/73531 Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Artyom Pervukhin authored
Function sqDiff is called multiple times in the hot path (the x, y loops) of drawPaletted from the image/draw package; the number of sqDiff calls is between 4×width×height and 4×width×height×len(palette) for each drawPaletted call. Simplify this function by removing the argument comparison and relying instead on the signed-to-unsigned integer conversion rules and the properties of unsigned integer operations guaranteed by the spec:

> For unsigned integer values, the operations +, -, *, and << are
> computed modulo 2^n, where n is the bit width of the unsigned integer's
> type. Loosely speaking, these unsigned integer operations discard high
> bits upon overflow, and programs may rely on "wrap around".

An image/gif package benchmark that depends on the updated code shows throughput improvements:

name               old time/op  new time/op  delta
QuantizedEncode-4   788ms ± 2%   468ms ± 9%  -40.58%  (p=0.000 n=9+10)

name               old speed      new speed      delta
QuantizedEncode-4  1.56MB/s ± 2%  2.63MB/s ± 8%  +68.47%  (p=0.000 n=9+10)

Closes #22375. Change-Id: Ic9a540e39ceb21e7741d308af1cfbe61b4ac347b Reviewed-on: https://go-review.googlesource.com/72373 Reviewed-by: Nigel Tao <nigeltao@golang.org>
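A hedged sketch of the simplified function, close in spirit to the image code but not guaranteed to be verbatim: the int32 difference is converted to uint32, so a "negative" difference simply wraps, and because (2^32 - d)^2 is congruent to d^2 modulo 2^32, squaring the wrapped value gives the same result as squaring the absolute difference, making the explicit comparison unnecessary.

package main

import "fmt"

// sqDiff returns the squared difference of x and y, shifted right by 2 so
// that summing four such terms cannot overflow a uint32. x and y are
// assumed to be 16-bit color values in [0, 0xffff].
//
// The conversion to uint32 replaces the old x>y / y>x comparison: unsigned
// arithmetic wraps modulo 2^32, and the square of the wrapped "negative"
// difference equals the square of the absolute difference.
func sqDiff(x, y int32) uint32 {
	d := uint32(x - y)
	return (d * d) >> 2
}

func main() {
	fmt.Println(sqDiff(0x1234, 0xffff)) // same value either way
	fmt.Println(sqDiff(0xffff, 0x1234))
}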
-
- 26 Oct, 2017 5 commits
-
-
Ian Lance Taylor authored
Change-Id: I3b388e4ac05ace5b7768ade03df2bee5bcc26ba8 Reviewed-on: https://go-review.googlesource.com/73790 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Than McIntosh <thanm@google.com>
-
isharipo authored
1. Move AXXX constants (A-enumeration) from "a.out.go" to "aenum.go".

2. Move VEX-encoded optabs from "asm6.go" to "vex_optabs.go".

Also run "go generate" over aenum.go. This explains the diff in "anames.go". Initialization of opindex is split into 2 loops: one for `vexOptab`, second for `optab`. Rationale: when VEX instructions are generated with the current structure, asm6.go is modified, which can lead to merge conflicts and larger diffs than desired. Same for a.out.go. This change makes x86avxgen usage possible: https://go-review.googlesource.com/c/arch/+/66972 Change-Id: Id9eefcf5ccf0a89440e5d01bcb80926a8163b41d Reviewed-on: https://go-review.googlesource.com/70630 Run-TryBot: Iskander Sharipov <iskander.sharipov@intel.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Christopher Koch authored
Change-Id: I43b32fbca8761d33927e7d0541bba123df0e0122 Reviewed-on: https://go-review.googlesource.com/73556 Reviewed-by: Tobias Klauser <tobias.klauser@gmail.com> Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
TestParallelRWMutexReaders has a non-preemptible loop in it that can deadlock if GC triggers. "Fix" it like we've fixed similar tests. Updates #10958. Change-Id: I13618f522f5ef0c864e7171ad2f655edececacd7 Reviewed-on: https://go-review.googlesource.com/73710 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Russ Cox authored
This picks up a few changes and should stop the macOS crashes. Fixes #22456. Change-Id: I7e0aae119a5564fcfaa16eeab7422bdd5ff0497b Reviewed-on: https://go-review.googlesource.com/73691 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-