1. 30 Oct, 2017 19 commits
    • cmd/compile: skip compiling wrappers for imported defined types · 03c8c566
      Matthew Dempsky authored
      When compiling a package that defines a type T with method T.M, we
      already compile and emit the wrapper method (*T).M. There's no need
      for every package that uses T to do the same.
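
      To make the wording concrete, here is a tiny hedged sketch (the package and
      names are made up, not from the CL): a value-receiver method M on a defined
      type T is what causes the compiler to synthesize the pointer wrapper (*T).M,
      which importing packages previously recompiled for themselves.

          package geom // hypothetical package defining T

          type T struct{ x, y int }

          // M has a value receiver; the compiler also emits a wrapper method with
          // receiver *T so that (&t).M() and interface satisfaction via *T work.
          // After this change, that wrapper is compiled only in this package.
          func (t T) M() int { return t.x + t.y }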
      
      Change-Id: I3ca2659029907570f8b98d66111686435fad7ed0
      Reviewed-on: https://go-review.googlesource.com/74412
      Run-TryBot: Matthew Dempsky <mdempsky@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Crawshaw <crawshaw@golang.org>
      03c8c566
    • net/url: preserve leading slashes when resolving path · 84e91e1d
      Mark Theunissen authored
      When doing resolvePath, if there are multiple leading slashes in the
      target, preserve them. This prevents an issue where the Go http.Client
      cleans up multiple leading slashes in the Location header in a
      redirect, resulting in a redirection to the incorrect target.
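
      An illustrative way to observe the behavior through the public API (not code
      from the CL): resolving a reference whose path starts with two slashes should
      keep both of them.

          package main

          import (
              "fmt"
              "net/url"
          )

          func main() {
              base, _ := url.Parse("http://example.com/old/page")
              // A redirect target whose path begins with "//".
              ref, _ := url.Parse("http://example.com//bucket/key")
              // With the fix the leading slashes survive resolution:
              // http://example.com//bucket/key
              fmt.Println(base.ResolveReference(ref))
          }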
      
      Fixes #21158.
      
      Change-Id: I6a21ea61ca3bc7033f3c8a6ccc21ecaa3e996fa8
      Reviewed-on: https://go-review.googlesource.com/51050
      Reviewed-by: Russ Cox <rsc@golang.org>
      Run-TryBot: Russ Cox <rsc@golang.org>
      84e91e1d
    • cmd/compile: adjust expectations of test for issue 18902 · b4c3fe7b
      David Chase authored
      The test for #18902 reads the assembly stream to be sure
      that the line number does not change too often (this is an
      indication that debugging the code will be unpleasant and
      that the compiler is probably getting line numbers "wrong").
      
      It checks that it is getting "enough" input, but the
      compiler has gotten enough better since the test was written
      that it now fails for lack of enough input.  The old
      threshold was 200 instructions, the new one is 150 (the
      minimum observed input is on arm64 with 184 instructions).
      
      Fixes #22494.
      
      Change-Id: Ibba7e9ff4ab6a7be369e5dd5859d150b7db94653
      Reviewed-on: https://go-review.googlesource.com/74357
      Run-TryBot: David Chase <drchase@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
      b4c3fe7b
    • cmd/compile: fix runtime.KeepAlive · 0153a413
      Keith Randall authored
      KeepAlive needs to introduce a use of the spill of the
      value it is keeping alive.  Without that, we don't guarantee
      that the spill dominates the KeepAlive.
      
      This bug was probably introduced with the code to move spills
      down to the dominator of the restores, instead of always spilling
      just after the value itself (CL 34822).
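
      For context, the usual KeepAlive pattern (illustrative only; fileHandle and
      rawRead are stand-ins, not code from the CL): the call marks the value live
      until that point, so the compiler must keep its spill reachable there.

          package handle

          import "runtime"

          type fileHandle struct{ fd uintptr }

          func rawRead(fd uintptr, buf []byte) int { return len(buf) } // stub

          func read(h *fileHandle, buf []byte) int {
              n := rawRead(h.fd, buf) // stand-in for a raw call that uses only fd
              // Keep h (and any finalizer that might close fd) alive until the
              // raw call above has completed.
              runtime.KeepAlive(h)
              return n
          }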
      
      Fixes #22458.
      
      Change-Id: I94955a21960448ffdacc4df775fe1213967b1d4c
      Reviewed-on: https://go-review.googlesource.com/74210
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
      Reviewed-by: David Chase <drchase@google.com>
      0153a413
    • cmd/dist: force non-devel version for cross-build buildlets · b09e2de7
      Russ Cox authored
      If the compiler has a non-devel version it will report that version
      to the go command for use as the "compiler ID" instead of using
      the content ID of the binary. This in turn allows the go command
      to see the compiled-for-amd64 arm compiler and the compiled-for-arm
      arm compiler as having the same ID, so that packages cross-compiled
      from amd64 look up-to-date when copied to the arm system
      during the linux-arm buildlets and trybots.
      
      Change-Id: I76cbf129303941f8e31bdb100e263478159ddaa5
      Reviewed-on: https://go-review.googlesource.com/74360
      Run-TryBot: Russ Cox <rsc@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
      b09e2de7
    • math: optimize dim and remove s390x assembly implementation · b97688d1
      Michael Munday authored
      By calculating dim directly, rather than calling max, we can simplify
      the generated code significantly. The compiler now reports that dim
      is easily inlineable, but it can't be inlined because there is still
      an assembly stub for Dim.
      
      Since dim is now very simple I no longer think it is worth having
      assembly implementations of it. I have therefore removed the s390x
      assembly. Removing the other assembly for Dim is #21913.
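
      A minimal sketch of what "calculating dim directly" can look like (the CL's
      exact code may differ; the special cases fall out of the subtraction):

          func dim(x, y float64) float64 {
              // +Inf-+Inf, -Inf--Inf, NaN-y and x-NaN all yield NaN here.
              v := x - y
              if v <= 0 {
                  return 0 // negative or zero difference
              }
              return v // positive difference, or NaN from the special cases
          }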
      
      name  old time/op  new time/op  delta
      Dim   4.29ns ± 0%  3.53ns ± 0%  -17.62%  (p=0.000 n=9+8)
      
      Change-Id: Ic38a6b51603cbc661dcdb868ecf2b1947e9f399e
      Reviewed-on: https://go-review.googlesource.com/64194
      Run-TryBot: Michael Munday <mike.munday@ibm.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Robert Griesemer <gri@golang.org>
      b97688d1
    • encoding/xml: don't panic when custom Unmarshaler sees StartElement · be08ddbf
      Sam Whited authored
      Change-Id: I90aa0a983abd0080f3de75d3340fdb15c1f9ca35
      Reviewed-on: https://go-review.googlesource.com/70891
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Run-TryBot: Sam Whited <sam@samwhited.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      be08ddbf
    • net/http/pprof: attach handlers using http.HandleFunc · 01c144c4
      Terin Stock authored
      Simplify how pprof attaches the handlers to the DefaultMux by using
      http.HandleFunc instead of manually wrapping the handlers in
      an http.HandlerFunc.
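
      In spirit, the registration now reads like this (a sketch of what the
      package's own init does on http.DefaultServeMux, not the exact diff):

          // Inside net/http/pprof (sketch). http.HandleFunc accepts a plain
          // func(http.ResponseWriter, *http.Request), so no explicit
          // http.HandlerFunc conversion is needed.
          func init() {
              http.HandleFunc("/debug/pprof/", Index)
              http.HandleFunc("/debug/pprof/cmdline", Cmdline)
              http.HandleFunc("/debug/pprof/profile", Profile)
              http.HandleFunc("/debug/pprof/symbol", Symbol)
              http.HandleFunc("/debug/pprof/trace", Trace)
          }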
      
      Change-Id: I65db262ebb2e29e4b6f30df9d2688f5daf782c29
      Reviewed-on: https://go-review.googlesource.com/71251
      Reviewed-by: Sam Whited <sam@samwhited.com>
      Reviewed-by: Tom Bergan <tombergan@google.com>
      Run-TryBot: Sam Whited <sam@samwhited.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      01c144c4
    • runtime: use buffered write barrier for bulkBarrierPreWrite · 877387e3
      Austin Clements authored
      This modifies bulkBarrierPreWrite to use the buffered write barrier
      instead of the eager write barrier. This reduces the number of system
      stack switches and sanity checks by a factor of the buffer size
      (currently 256). This affects both typedmemmove and typedmemclr.
      
      Since this is purely a runtime change, it applies to all arches
      (unlike the pointer write barrier).
      
      name                 old time/op  new time/op  delta
      BulkWriteBarrier-12  7.33ns ± 6%  4.46ns ± 9%  -39.10%  (p=0.000 n=20+19)
      
      Updates #22460.
      
      Change-Id: I6a686a63bbf08be02b9b97250e37163c5a90cdd8
      Reviewed-on: https://go-review.googlesource.com/73832
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      877387e3
    • runtime: simplify and optimize typedslicecopy · 6a5f1e58
      Austin Clements authored
      Currently, typedslicecopy meticulously performs a typedmemmove on
      every element of the slice. This probably used to be necessary because
      we only had an individual element's type, but now we use the heap
      bitmap, so we only need to know whether the type has any pointers and
      how big it is. Hence, this CL rewrites typedslicecopy to simply
      perform one bulk barrier and one memmove.
      
      This also has a side-effect of eliminating two unnecessary write
      barriers per slice element that were coming from updates to dstp and
      srcp, which were stored in the parent stack frame. However, most of
      the win comes from eliminating the loops.
      
      name                 old time/op  new time/op  delta
      BulkWriteBarrier-12  7.83ns ±10%  7.33ns ± 6%  -6.45%  (p=0.000 n=20+20)
      
      Updates #22460.
      
      Change-Id: Id3450e9f36cc8e0892f268319b136f0d8f5464b8
      Reviewed-on: https://go-review.googlesource.com/73831
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      6a5f1e58
    • runtime: benchmark for bulk write barriers · f96b95bc
      Austin Clements authored
      This adds a benchmark of typedslicecopy and its bulk write barriers.
      
      For #22460.
      
      Change-Id: I439ca3b130bb22944468095f8f18b464e5bb43ca
      Reviewed-on: https://go-review.googlesource.com/74051
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      f96b95bc
    • cmd/compile: compiler support for buffered write barrier · 7e343134
      Austin Clements authored
      This CL implements the compiler support for calling the buffered write
      barrier added by the previous CL.
      
      Since the buffered write barrier is only implemented on amd64 right
      now, this still supports the old, eager write barrier as well. There's
      little overhead to supporting both and this way a few tests in
      test/fixedbugs that expect to have liveness maps at write barrier
      calls can easily opt-in to the old, eager barrier.
      
      This significantly improves the performance of the write barrier:
      
      name             old time/op  new time/op  delta
      WriteBarrier-12  73.5ns ±20%  19.2ns ±27%  -73.90%  (p=0.000 n=19+18)
      
      It also reduces the size of binaries because the write barrier call is
      more compact:
      
      name        old object-bytes  new object-bytes  delta
      Template           398k ± 0%         393k ± 0%  -1.14%  (p=0.008 n=5+5)
      Unicode            208k ± 0%         206k ± 0%  -1.00%  (p=0.008 n=5+5)
      GoTypes           1.18M ± 0%        1.15M ± 0%  -2.00%  (p=0.008 n=5+5)
      Compiler          4.05M ± 0%        3.88M ± 0%  -4.26%  (p=0.008 n=5+5)
      SSA               8.25M ± 0%        8.11M ± 0%  -1.59%  (p=0.008 n=5+5)
      Flate              228k ± 0%         224k ± 0%  -1.83%  (p=0.008 n=5+5)
      GoParser           295k ± 0%         284k ± 0%  -3.62%  (p=0.008 n=5+5)
      Reflect           1.00M ± 0%        0.99M ± 0%  -0.70%  (p=0.008 n=5+5)
      Tar                339k ± 0%         333k ± 0%  -1.67%  (p=0.008 n=5+5)
      XML                404k ± 0%         395k ± 0%  -2.10%  (p=0.008 n=5+5)
      [Geo mean]         704k              690k       -2.00%
      
      name        old exe-bytes     new exe-bytes     delta
      HelloSize         1.05M ± 0%        1.04M ± 0%  -1.55%  (p=0.008 n=5+5)
      
      https://perf.golang.org/search?q=upload:20171027.1
      
      (Amusingly, this also reduces compiler allocations by 0.75%, which,
      combined with the better write barrier, speeds up the compiler overall
      by 2.10%. See the perf link.)
      
      It slightly improves the performance of most of the go1 benchmarks and
      improves the performance of the x/benchmarks:
      
      name                      old time/op    new time/op    delta
      BinaryTree17-12              2.40s ± 1%     2.47s ± 1%  +2.69%  (p=0.000 n=19+19)
      Fannkuch11-12                2.95s ± 0%     2.95s ± 0%  +0.21%  (p=0.000 n=20+19)
      FmtFprintfEmpty-12          41.8ns ± 4%    41.4ns ± 2%  -1.03%  (p=0.014 n=20+20)
      FmtFprintfString-12         68.7ns ± 2%    67.5ns ± 1%  -1.75%  (p=0.000 n=20+17)
      FmtFprintfInt-12            79.0ns ± 3%    77.1ns ± 1%  -2.40%  (p=0.000 n=19+17)
      FmtFprintfIntInt-12          127ns ± 1%     123ns ± 3%  -3.42%  (p=0.000 n=20+20)
      FmtFprintfPrefixedInt-12     152ns ± 1%     150ns ± 1%  -1.02%  (p=0.000 n=18+17)
      FmtFprintfFloat-12           211ns ± 1%     209ns ± 0%  -0.99%  (p=0.000 n=20+16)
      FmtManyArgs-12               500ns ± 0%     496ns ± 0%  -0.73%  (p=0.000 n=17+20)
      GobDecode-12                6.44ms ± 1%    6.53ms ± 0%  +1.28%  (p=0.000 n=20+19)
      GobEncode-12                5.46ms ± 0%    5.46ms ± 1%    ~     (p=0.550 n=19+20)
      Gzip-12                      220ms ± 1%     216ms ± 0%  -1.75%  (p=0.000 n=19+19)
      Gunzip-12                   38.8ms ± 0%    38.6ms ± 0%  -0.30%  (p=0.000 n=18+19)
      HTTPClientServer-12         79.0µs ± 1%    78.2µs ± 1%  -1.01%  (p=0.000 n=20+20)
      JSONEncode-12               11.9ms ± 0%    11.9ms ± 0%  -0.29%  (p=0.000 n=20+19)
      JSONDecode-12               52.6ms ± 0%    52.2ms ± 0%  -0.68%  (p=0.000 n=19+20)
      Mandelbrot200-12            3.69ms ± 0%    3.68ms ± 0%  -0.36%  (p=0.000 n=20+20)
      GoParse-12                  3.13ms ± 1%    3.18ms ± 1%  +1.67%  (p=0.000 n=19+20)
      RegexpMatchEasy0_32-12      73.2ns ± 1%    72.3ns ± 1%  -1.19%  (p=0.000 n=19+18)
      RegexpMatchEasy0_1K-12       241ns ± 0%     239ns ± 0%  -0.83%  (p=0.000 n=17+16)
      RegexpMatchEasy1_32-12      68.6ns ± 1%    69.0ns ± 1%  +0.47%  (p=0.015 n=18+16)
      RegexpMatchEasy1_1K-12       364ns ± 0%     361ns ± 0%  -0.67%  (p=0.000 n=16+17)
      RegexpMatchMedium_32-12      104ns ± 1%     103ns ± 1%  -0.79%  (p=0.001 n=20+15)
      RegexpMatchMedium_1K-12     33.8µs ± 3%    34.0µs ± 2%    ~     (p=0.267 n=20+19)
      RegexpMatchHard_32-12       1.64µs ± 1%    1.62µs ± 2%  -1.25%  (p=0.000 n=19+18)
      RegexpMatchHard_1K-12       49.2µs ± 0%    48.7µs ± 1%  -0.93%  (p=0.000 n=19+18)
      Revcomp-12                   391ms ± 5%     396ms ± 7%    ~     (p=0.154 n=19+19)
      Template-12                 63.1ms ± 0%    59.5ms ± 0%  -5.76%  (p=0.000 n=18+19)
      TimeParse-12                 307ns ± 0%     306ns ± 0%  -0.39%  (p=0.000 n=19+17)
      TimeFormat-12                325ns ± 0%     323ns ± 0%  -0.50%  (p=0.000 n=19+19)
      [Geo mean]                  47.3µs         46.9µs       -0.67%
      
      https://perf.golang.org/search?q=upload:20171026.1
      
      name                       old time/op  new time/op  delta
      Garbage/benchmem-MB=64-12  2.25ms ± 1%  2.20ms ± 1%  -2.31%  (p=0.000 n=18+18)
      HTTP-12                    12.6µs ± 0%  12.6µs ± 0%  -0.72%  (p=0.000 n=18+17)
      JSON-12                    11.0ms ± 0%  11.0ms ± 1%  -0.68%  (p=0.000 n=17+19)
      
      https://perf.golang.org/search?q=upload:20171026.2
      
      Updates #14951.
      Updates #22460.
      
      Change-Id: Id4c0932890a1d41020071bec73b8522b1367d3e7
      Reviewed-on: https://go-review.googlesource.com/73712
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
      7e343134
    • runtime: buffered write barrier implementation · e9079a69
      Austin Clements authored
      This implements runtime support for buffered write barriers on amd64.
      The buffered write barrier has a fast path that simply enqueues
      pointers in a per-P buffer. Unlike the current write barrier, this
      fast path is *not* a normal Go call and does not require the compiler
      to spill general-purpose registers or put arguments on the stack. When
      the buffer fills up, the write barrier takes the slow path, which
      spills all general purpose registers and flushes the buffer. We don't
      allow safe-points or stack splits while this frame is active, so it
      doesn't matter that we have no type information for the spilled
      registers in this frame.
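
      A purely conceptual sketch of the per-P buffer idea described above (not the
      runtime's actual code or data structures):

          const wbBufEntries = 256

          type wbBuf struct {
              next int
              buf  [wbBufEntries]uintptr
          }

          // Fast path: record the pointer; take the slow path only when the
          // buffer fills.
          func (b *wbBuf) put(p uintptr) {
              b.buf[b.next] = p
              b.next++
              if b.next == len(b.buf) {
                  b.flush()
              }
          }

          // Slow path: hand the whole batch to the GC's marking machinery
          // (represented by a stub here) and reset the buffer.
          func (b *wbBuf) flush() {
              markPointers(b.buf[:b.next])
              b.next = 0
          }

          func markPointers(ps []uintptr) { /* marking elided in this sketch */ }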
      
      One minor complication is cgocheck=2 mode, which uses the write
      barrier to detect Go pointers being written to non-Go memory. We
      obviously can't buffer this, so instead we set the buffer to its
      minimum size, forcing the write barrier into the slow path on every
      call. For this specific case, we pass additional information as
      arguments to the flush function. This also requires enabling the cgo
      write barrier slightly later during runtime initialization, after Ps
      (and the per-P write barrier buffers) have been initialized.
      
      The code in this CL is not yet active. The next CL will modify the
      compiler to generate calls to the new write barrier.
      
      This reduces the average cost of the write barrier by roughly a factor
      of 4, which will pay for the cost of having it enabled more of the
      time after we make the GC pacer less aggressive. (Benchmarks will be
      in the next CL.)
      
      Updates #14951.
      Updates #22460.
      
      Change-Id: I396b5b0e2c5e5c4acfd761a3235fd15abadc6cb1
      Reviewed-on: https://go-review.googlesource.com/73711
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      e9079a69
    • runtime: add benchmark for write barriers · 1e8ab99b
      Austin Clements authored
      For #22460.
      
      Change-Id: I798f26d45bbe1efd16b632e201413cb26cb3e6c7
      Reviewed-on: https://go-review.googlesource.com/73811
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      1e8ab99b
    • runtime: make systemstack tail call if already switched · 15d6ab69
      Austin Clements authored
      Currently systemstack always calls its argument, even if we're already
      on the system stack. Unfortunately, traceback with _TraceJump stops at
      the first systemstack it sees, which often cuts off runtime stacks
      early in profiles.
      
      Fix this by performing a tail call if we're already on the system
      stack. This eliminates it from the traceback entirely, so it won't
      stop prematurely (or all get mushed into a single node in the profile
      graph).
      
      Change-Id: Ibc69e8765e899f8d3806078517b8c7314da196f4
      Reviewed-on: https://go-review.googlesource.com/74050
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
      Reviewed-by: Keith Randall <khr@golang.org>
      15d6ab69
    • misc/cgo/testshared: don't assume mtimes trigger rebuilds · 67a7d5d8
      Russ Cox authored
      The upcoming CL 73212 will see through mtime modifications.
      Change the underlying file too.
      
      Change-Id: Ib23b4136a62ee87bce408b76bb0385451ae7dcd2
      Reviewed-on: https://go-review.googlesource.com/74130
      Run-TryBot: Russ Cox <rsc@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      67a7d5d8
    • cmd/compile,cmd/internal/obj/ppc64: make math.Abs, math.Copysign intrinsics on ppc64x · 4d0151ed
      Lynn Boger authored
      This adds support for math.Abs and math.Copysign to be intrinsics on ppc64x.
      
      New instruction FCPSGN is added to generate fcpsgn. Some new
      rules are added to improve the int<->float conversions that are
      generated mainly due to the Float64bits and Float64frombits in
      the math package. PPC64.rules is also modified as suggested
      in the review for CL 63290.
      
      Improvements:
      benchmark                           old ns/op     new ns/op     delta
      BenchmarkAbs-16                   1.12          0.69          -38.39%
      BenchmarkCopysign-16              1.30          0.93          -28.46%
      BenchmarkNextafter32-16           9.34          8.05          -13.81%
      BenchmarkFrexp-16                 8.81          7.60          -13.73%
      
      Others that used Copysign also saw smaller improvements.
      
      I attempted to make this work using rules since that
      seems to be preferred, but due to the use of Float64bits and
      Float64frombits in these functions, several rules had to be added and
      even then not all cases were matched. Using rules became too
      complicated and seemed too fragile for these.
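
      For reference, the kind of pure-Go code involved (a sketch of the
      math.Float64bits/Float64frombits pattern that produces those int<->float
      conversions; not necessarily the exact math package source):

          func copysign(x, y float64) float64 {
              const signBit = 1 << 63
              return math.Float64frombits(
                  math.Float64bits(x)&^signBit | math.Float64bits(y)&signBit)
          }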
      
      Updates #21390
      
      Change-Id: Ia265da9a18355e08000818a4fba1a40e9e031995
      Reviewed-on: https://go-review.googlesource.com/67130
      Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
      Reviewed-by: Keith Randall <khr@golang.org>
      4d0151ed
    • runtime: use -buildmode=pie in testCgoPprofPIE instead of -extldflags=-pie · 58de9f35
      Lynn Boger authored
      Errors occur in runtime test testCgoPprofPIE when the test
      is built by passing -pie to the external linker with code
      that was not built as PIC. This occurs on ppc64le because
      non-PIC is the default, and fails only on newer distros
      where the address range used for programs is high enough
      to cause relocation overflow. This test should be built
      with -buildmode=pie since that correctly generates PIC
      with -pie.
      
      Related issues are #21954 and #22126.
      
      Updates #22459
      
      Change-Id: Ib641440bc9f94ad2b97efcda14a4b482647be8f7
      Reviewed-on: https://go-review.googlesource.com/73970
      Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      58de9f35
    • cmd/compile: fix incorrect go:noinline usage · 3c46f49f
      Hugues Bruant authored
      This pragma is not actually honored by the compiler.
      The tests implicitly relied on the inliner being unable
      to inline closures with captured variables, which will
      soon change.
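
      For illustration (hedged; the affected tests differ), compiler directives
      such as //go:noinline attach to function declarations, not to function
      literals, which is why the usage on the closure below has no effect:

          //go:noinline
          func declared(x int) int { return x * 2 } // directive honored

          func makeClosure(a int) func(int) int {
              //go:noinline // ignored: directives do not apply to func literals
              return func(b int) int { return a + b }
          }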
      
      Fixes #22208
      
      Change-Id: I13abc9c930b9156d43ec216f8efb768952a29439
      Reviewed-on: https://go-review.googlesource.com/73211
      Reviewed-by: Michael Munday <mike.munday@ibm.com>
      3c46f49f
  2. 29 Oct, 2017 12 commits
    • cmd/go: adjust default GOROOT to prefer runtime.GOROOT() spelling · eac6fe08
      Russ Cox authored
      If runtime.GOROOT() and the os.Executable method for finding GOROOT
      find the same directory but with different spellings, prefer the spelling
      returned by runtime.GOROOT().
      
      This avoids an inconsistency if "pwd" returns one spelling but a
      different spelling is used in $PATH (and therefore in os.Executable()).
      make.bash runs with GOROOT=$(cd .. && pwd); the goal is to allow
      the resulting toolchain to use that default setting (unless moved)
      even if the directory spelling is different in $PATH.
      
      Change-Id: If96b28b9e8697f4888f153a400b40bbf58a9128b
      Reviewed-on: https://go-review.googlesource.com/74250
      Run-TryBot: Russ Cox <rsc@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Reviewed-by: David Crawshaw <crawshaw@golang.org>
      eac6fe08
    • runtime: eliminate remaining recordspan write barriers · 164e1b84
      Austin Clements authored
      recordspan has two remaining write barriers from writing to the
      pointer to the backing store of h.allspans. However, h.allspans is
      always backed by off-heap memory, so let the compiler know this.
      Unfortunately, this isn't quite as clean as most go:notinheap uses
      because we can't directly name the backing store of a slice, but we
      can get it done with some judicious casting.
      
      For #22460.
      
      Change-Id: I296f92fa41cf2cb6ae572b35749af23967533877
      Reviewed-on: https://go-review.googlesource.com/73414
      Reviewed-by: Rick Hudson <rlh@golang.org>
      164e1b84
    • cmd/compile: elide write barriers for copy of notinheap pointers · b78b54ff
      Austin Clements authored
      Currently copy and append for types containing only scalars and
      notinheap pointers still get compiled to have write barriers, even
      though those write barriers are unnecessary. Fix these to use
      HasHeapPointer instead of just Haspointer so that they elide write
      barriers when possible.
      
      This fixes the unnecessary write barrier in runtime.recordspan when it
      grows the h.allspans slice. This is important because recordspan gets
      called (*very* indirectly) from (*gcWork).tryGet, which is
      go:nowritebarrierrec. Unfortunately, the compiler's analysis has no
      hope of seeing this because it goes through the indirect call
      fixalloc.first, but I saw it happen.
      
      Change-Id: Ieba3abc555a45f573705eab780debcfe5c4f5dd1
      Reviewed-on: https://go-review.googlesource.com/73413
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Matthew Dempsky <mdempsky@google.com>
      b78b54ff
    • cmd/compile: make HasHeapPointer recursive · 316e3036
      Austin Clements authored
      Currently (*Type).HasHeapPointer only ignores pointers to go:notinheap
      types if the type itself is a pointer to a go:notinheap type. However,
      if it's some other type that contains pointers where all of those
      pointers are go:notinheap, it will conservatively return true. As a
      result, we'll use write barriers where they aren't needed, for example
      calling typedmemmove instead of just memmove on structs that contain
      only go:notinheap pointers.
      
      Fix this by making HasHeapPointer walk the whole type looking for
      pointers that aren't marked go:notinheap.
      
      Change-Id: Ib8c6abf6f7a20f34969d1d402c5498e0b990be59
      Reviewed-on: https://go-review.googlesource.com/73412
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Matthew Dempsky <mdempsky@google.com>
      316e3036
    • cmd/compile: report typedslicecopy write barriers · afbe646a
      Austin Clements authored
      Most write barrier calls are inserted by SSA, but copy and append are
      lowered to runtime.typedslicecopy during walk. Fix these to set
      Func.WBPos and emit the "write barrier" warning, as done for the write
      barriers inserted by SSA. As part of this, we refactor setting WBPos
      and emitting this warning into the frontend so it can be shared by
      both walk and SSA.
      
      Change-Id: I5fe9997d9bdb55e03e01dd58aee28908c35f606b
      Reviewed-on: https://go-review.googlesource.com/73411
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Matthew Dempsky <mdempsky@google.com>
      afbe646a
    • crypto/{ecdsa,rsa}: rename argument to PrivateKey.Sign. · 507ca082
      Adam Langley authored
      The crypto.Signer interface takes pre-hashed messages for ECDSA and RSA,
      but the argument in the implementations was called “msg”, not “digest”,
      which is confusing.
      
      This change renames them to help clarify the intended use.
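
      An illustrative caller (not part of the CL): with crypto.Signer the message
      is hashed first, and the digest is what gets passed to Sign.

          package main

          import (
              "crypto"
              "crypto/ecdsa"
              "crypto/elliptic"
              "crypto/rand"
              "crypto/sha256"
              "fmt"
          )

          func main() {
              key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
              if err != nil {
                  panic(err)
              }
              var signer crypto.Signer = key

              digest := sha256.Sum256([]byte("message to sign"))
              sig, err := signer.Sign(rand.Reader, digest[:], crypto.SHA256)
              if err != nil {
                  panic(err)
              }
              fmt.Printf("got %d-byte signature\n", len(sig))
          }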
      
      Change-Id: Ie2fb8753ca5280e493810d211c7c66223f94af88
      Reviewed-on: https://go-review.googlesource.com/70950
      Reviewed-by: Filippo Valsorda <hi@filippo.io>
      507ca082
    • cmd/compile: improve coverage of nowritebarrierrec check · 5a4b6bce
      Austin Clements authored
      The current go:nowritebarrierrec checker has two problems that limit
      its coverage:
      
      1. It doesn't understand that systemstack calls its argument, which
      means there are several cases where we fail to detect prohibited write
      barriers.
      
      2. It only observes calls in the AST, so calls constructed during
      lowering by SSA aren't followed.
      
      This CL completely rewrites this checker to address these issues.
      
      The current checker runs entirely after walk and uses visitBottomUp,
      which introduces several problems for checking across systemstack.
      First, visitBottomUp itself doesn't understand systemstack calls, so
      the callee may be ordered after the caller, causing the checker to
      fail to propagate constraints. Second, many systemstack calls are
      passed a closure, which is quite difficult to resolve back to the
      function definition after transformclosure and walk have run. Third,
      visitBottomUp works exclusively on the AST, so it can't observe calls
      created by SSA.
      
      To address these problems, this commit splits the check into two
      phases and rewrites it to use a call graph generated during SSA
      lowering. The first phase runs before transformclosure/walk and simply
      records systemstack arguments when they're easy to get. Then, it
      modifies genssa to record static call edges at the point where we're
      lowering to Progs (which is the latest point at which position
      information is conveniently available). Finally, the second phase runs
      after all functions have been lowered and uses a direct BFS walk of
      the call graph (combining systemstack calls with static calls) to find
      prohibited write barriers and construct nice error messages.
      
      Fixes #22384.
      For #22460.
      
      Change-Id: I39668f7f2366ab3c1ab1a71eaf25484d25349540
      Reviewed-on: https://go-review.googlesource.com/72773
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Matthew Dempsky <mdempsky@google.com>
      5a4b6bce
    • runtime: allow write barriers in gchelper · 3526d803
      Austin Clements authored
      We're about to start tracking nowritebarrierrec through systemstack
      calls, which detects that we're calling markroot (which has write
      barriers) from gchelper, which is called from the scheduler during STW
      apparently without a P.
      
      But it turns out that func helpgc, which wakes up blocked Ms to run
      gchelper, installs a P for gchelper to use. This means there *is* a P
      when gchelper runs, so it is allowed to have write barriers. Tell the
      compiler this by marking gchelper go:yeswritebarrierrec. Also,
      document the call to gchelper so I don't have to spend another half a
      day puzzling over how on earth this could possibly work before
      discovering the spooky action-at-a-distance in helpgc.
      
      Updates #22384.
      For #22460.
      
      Change-Id: I7394c9b4871745575f87a2d4fbbc5b8e54d669f7
      Reviewed-on: https://go-review.googlesource.com/72772
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      3526d803
    • runtime: eliminate write barriers from persistentalloc · d941b075
      Austin Clements authored
      We're about to start tracking nowritebarrierrec through systemstack
      calls, which will reveal write barriers in persistentalloc prohibited
      by various callers.
      
      The pointers manipulated by persistentalloc are always to off-heap
      memory, so this removes these write barriers statically by introducing
      a new go:notinheap type to represent generic off-heap memory.
      
      Updates #22384.
      For #22460.
      
      Change-Id: Id449d9ebf145b14d55476a833e7f076b0d261d57
      Reviewed-on: https://go-review.googlesource.com/72771
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      d941b075
    • runtime: allow write barriers in startpanic_m · 070cc8eb
      Austin Clements authored
      We're about to start tracking nowritebarrierrec through systemstack
      calls, which will reveal write barriers in startpanic_m prohibited by
      various callers.
      
      We actually can allow write barriers here because the write barrier is
      a no-op when we're panicking. Let the compiler know.
      
      Updates #22384.
      For #22460.
      
      Change-Id: Ifb3a38d3dd9a4125c278c3680f8648f987a5b0b8
      Reviewed-on: https://go-review.googlesource.com/72770
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      070cc8eb
    • runtime: mark gcWork methods nowritebarrierrec · 249b5cc9
      Austin Clements authored
      Currently most of these are marked go:nowritebarrier as a hint, but
      it's actually important that these not invoke write barriers
      recursively. The danger is that some gcWork method would invoke the
      write barrier while the gcWork is in an inconsistent state and that
      the write barrier would in turn invoke some other gcWork method, which
      would crash or permanently corrupt the gcWork. Simply marking the
      write barrier itself as go:nowritebarrierrec isn't sufficient to
      prevent this if the write barrier doesn't use the outer method.
      
      Thankfully, this doesn't cause any build failures, so we were getting
      this right. :)
      
      For #22460.
      
      Change-Id: I35a7292a584200eb35a49507cd3fe359ba2206f6
      Reviewed-on: https://go-review.googlesource.com/72554
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      249b5cc9
    • runtime: remove write barriers from newstack, gogo · 3beaf26e
      Austin Clements authored
      Currently, newstack and gogo have write barriers for maintaining the
      context register saved in g.sched.ctxt. This is troublesome, because
      newstack can be called from go:nowritebarrierrec places that can't
      allow write barriers. It happens to be benign because g.sched.ctxt
      will always be nil on entry to newstack *and* it so happens the
      incoming ctxt will also always be nil in these contexts (I
      think/hope), but this is playing with fire. It's also desirable to
      mark newstack go:nowritebarrierrec to prevent any other, non-benign
      write barriers from creeping in, but we can't do that right now
      because of this one write barrier.
      
      Fix all of this by observing that g.sched.ctxt is really just a saved
      live pointer register. Hence, we can shade it when we scan g's stack
      and otherwise move it back and forth between the actual context
      register and g.sched.ctxt without write barriers. This means we can
      save it in morestack along with all of the other g.sched, eliminate
      the save from newstack along with its troublesome write barrier, and
      eliminate the shenanigans in gogo to invoke the write barrier when
      restoring it.
      
      Once we've done all of this, we can mark newstack
      go:nowritebarrierrec.
      
      Fixes #22385.
      For #22460.
      
      Change-Id: I43c24958e3f6785b53c1350e1e83c2844e0d1522
      Reviewed-on: https://go-review.googlesource.com/72553
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
      3beaf26e
  3. 28 Oct, 2017 5 commits
  4. 27 Oct, 2017 4 commits
    • cmd/dist: translate /private/var to /var on darwin builders · 4f2ee499
      Russ Cox authored
      This is ugly but needed on the builders, because they do not set
      PWD/GOROOT consistently, and the new content-based staleness
      understands that the setting of GOROOT influences the content in
      the linker outputs.
      
      Change-Id: I0606f2c70b719619b188864ad3ae1b34432140cb
      Reviewed-on: https://go-review.googlesource.com/74070
      Run-TryBot: Russ Cox <rsc@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Crawshaw <crawshaw@golang.org>
      4f2ee499
    • net/http: fix minor leak in Header.WriteSubset · a4d03a9b
      Joshua Rubin authored
      Header.WriteSubset uses a sync.Pool but wouldn't Put the sorter back in
      the pool if there was an error writing to the io.Writer
      
      I'm not really sure why the sorter is returned to begin with. The
      comment says "for possible return to headerSorterCache".
      
      This also doesn't address potential panics that might occur, but the
      overhead of doing the Put in a defer would likely be too great.
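
      The shape of the fix, sketched with made-up names (sorter, sorterPool and
      writeSorted are illustrative, not net/http's actual unexported identifiers):
      return the pooled object on the error path too.

          import (
              "io"
              "sort"
              "sync"
          )

          var sorterPool = sync.Pool{New: func() interface{} { return new(sorter) }}

          type sorter struct{ kvs []string }

          func writeSorted(w io.Writer, kvs []string) error {
              s := sorterPool.Get().(*sorter)
              s.kvs = append(s.kvs[:0], kvs...)
              sort.Strings(s.kvs)
              for _, kv := range s.kvs {
                  if _, err := io.WriteString(w, kv+"\r\n"); err != nil {
                      sorterPool.Put(s) // previously skipped, leaking the sorter
                      return err
                  }
              }
              sorterPool.Put(s)
              return nil
          }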
      
      Change-Id: If3c45a4c3e11f6ec65d187e25b63455b0142d4e3
      Reviewed-on: https://go-review.googlesource.com/73910
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Tom Bergan <tombergan@google.com>
      a4d03a9b
    • cmd/compile, cmd/link: support for DWARF file reference relocations · b52b77cb
      Than McIntosh authored
      New relocation flavor R_DWARFFILEREF, to be applied to DWARF attribute
      values that correspond to file references (ex: DW_AT_decl_file,
      DW_AT_call_file). The LSym for this relocation is the file itself; the
      linker replaces the relocation target with the index of the specified
      file in the line table's file section.
      
      Note: for testing purposes this patch changes the DWARF function
      subprogram DIE abbrev to include DW_AT_decl_file (allowed by DWARF
      but not especially useful) so as to have a way to test this
      functionality. This attribute will be removed once there are other
      file reference attributes (coming as part of inlining support).
      
      Change-Id: Icf676beb60fcc33f06d78e747ef717532daaa3ba
      Reviewed-on: https://go-review.googlesource.com/73330
      Run-TryBot: Than McIntosh <thanm@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
      b52b77cb
    • cmd/dist: use latest heap, sort packages for compiler bootstrap · a93bc1d2
      Russ Cox authored
      The compiler depends on the way heap and sort break ties
      in some cases. Instead of trying to find them all, bundle
      those packages into the bootstrap compiler builds.
      
      The overall goal is that Go1.4 building cmd/compile during the
      bootstrap process produces a semantically equivalent compiler
      to cmd/compile compiling itself. After this CL, that property is true,
      at least for the compiler compiling itself and the other tools.
      
      A test for this property will be in CL 73212.
      
      Change-Id: Icc1ba7cbe828f5673e8198ebacb18c7c01f3a735
      Reviewed-on: https://go-review.googlesource.com/73952
      Run-TryBot: Russ Cox <rsc@golang.org>
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      a93bc1d2