28 Oct, 2016 11 commits
    • time: clarify Equal docs · 023556c0
      Ian Lance Taylor authored
      The docs used to imply that using == would compare Locations, but of
      course it just compares Location pointers, which will have unpredictable
      results depending on how the pointers are loaded.
      
      Change-Id: I783c1309e476a9616a1c1c290eac713aba3b0b57
      Reviewed-on: https://go-review.googlesource.com/32332
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
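
      A minimal, runnable illustration of the distinction the docs now draw
      (the zone name here is arbitrary):

          package main

          import (
                  "fmt"
                  "time"
          )

          func main() {
                  t := time.Date(2016, 10, 28, 12, 0, 0, 0, time.UTC)
                  u := t.In(time.FixedZone("UTC+0", 0)) // same instant, different *Location

                  fmt.Println(t == u)     // false: == also compares the Location pointers
                  fmt.Println(t.Equal(u)) // true: Equal compares only the time instant
          }
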
    • net: use IPv4 reserved address blocks for documentation · 9575c580
      Mikio Hara authored
      Updates #15228.
      
      Change-Id: Iefdffa146703ee1c04afc2b71d9de1f0a0811f86
      Reviewed-on: https://go-review.googlesource.com/32146
      Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • net: deflake TestLookupNonLDH · 69b7fe1a
      Mikio Hara authored
      Fixes #17623.
      
      Change-Id: I4717e8399f955c9be7ba19108bb0bcc108187c04
      Reviewed-on: https://go-review.googlesource.com/32147
      Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • runtime: ensure elapsed cycles are not negative · a1b6e169
      Peter Weinberger authored
      On solaris/amd64, the reported cycle count is sometimes negative;
      replace such values with 0.
      
      Change-Id: I364eea5ca072281245c7ab3afb0bf69adc3a8eae
      Reviewed-on: https://go-review.googlesource.com/32258
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
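
      A hedged sketch of the shape of the fix (names are illustrative, not
      the runtime's actual identifiers):

          package main

          import "fmt"

          // elapsedCycles clamps a possibly negative cycle delta to zero,
          // since on solaris/amd64 the counter can appear to run backwards.
          func elapsedCycles(start, end int64) int64 {
                  d := end - start
                  if d < 0 {
                          d = 0
                  }
                  return d
          }

          func main() {
                  fmt.Println(elapsedCycles(100, 90)) // prints 0, not -10
          }
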
    • cmd/compile: improve not enough / too many arguments errors · ec5b6406
      Matthew Dempsky authored
      Use "have" and "want" and multiple lines like other similar error
      messages. Also, fix handling of ... and multi-value function calls.
      
      Fixes #17650.
      
      Change-Id: I4850e79c080eac8df3b92a4accf9e470dff63c9a
      Reviewed-on: https://go-review.googlesource.com/32261
      Reviewed-by: Robert Griesemer <gri@golang.org>
      Run-TryBot: Matthew Dempsky <mdempsky@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
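
      For instance, a one-argument call to a two-argument function now fails
      with a multi-line diagnostic along the lines of the comment below
      (exact wording approximate):

          package main

          func f(x, y int) {}

          func main() {
                  f(1)
                  // not enough arguments in call to f
                  //      have (number)
                  //      want (int, int)
          }
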
    • runtime: fix SP adjustment on amd64p32 · 1bd39e79
      Austin Clements authored
      On amd64p32, rt0_go attempts to reserve 128 bytes of scratch space on
      the stack, but due to a register mixup this ends up being a no-op. Fix
      this so we actually reserve the stack space.
      
      Change-Id: I04dbfbeb44f3109528c8ec74e1136bc00d7e1faa
      Reviewed-on: https://go-review.googlesource.com/32331
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • runtime: disable stack rescanning by default · bd640c88
      Austin Clements authored
      With the hybrid barrier in place, we can now disable stack rescanning
      by default. This commit adds a "gcrescanstacks" GODEBUG variable that
      is off by default but can be set to re-enable STW stack rescanning.
      The plan is to leave this off but available in Go 1.8 for debugging
      and as a fallback.
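
      For example, a process could be relaunched with rescanning re-enabled
      like so; a hedged sketch, where ./myprog is a placeholder binary
      (equivalently, GODEBUG=gcrescanstacks=1 ./myprog from a shell):

          package main

          import (
                  "os"
                  "os/exec"
          )

          func main() {
                  cmd := exec.Command("./myprog") // placeholder binary under test
                  cmd.Env = append(os.Environ(), "GODEBUG=gcrescanstacks=1")
                  cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
                  _ = cmd.Run() // ignore exit status in this sketch
          }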
      
      With this change, worst-case mark termination time at GOMAXPROCS=12
      *not* including time spent stopping the world (which is still
      unbounded) is reliably under 100 µs, with a 95%ile around 50 µs in
      every benchmark I tried (the go1 benchmarks, the x/benchmarks garbage
      benchmark, and the gcbench activegs and rpc benchmarks). Including
      time spent stopping the world usually adds about 20 µs to total STW
      time at GOMAXPROCS=12, but I've seen it add around 150 µs in these
      benchmarks when a goroutine takes time to reach a safe point (see
      issue #10958) or when stopping the world races with goroutine
      switches. At GOMAXPROCS=1, where this isn't an issue, worst case STW
      is typically 30 µs.
      
      The go-gcbench activegs benchmark is designed to stress large numbers
      of dirty stacks. This commit reduces 95%ile STW time for 500k dirty
      stacks by nearly three orders of magnitude, from 150ms to 195µs.
      
      This has little effect on the throughput of the go1 benchmarks or the
      x/benchmarks benchmarks.
      
      name         old time/op  new time/op  delta
      XGarbage-12  2.31ms ± 0%  2.32ms ± 1%  +0.28%  (p=0.001 n=17+16)
      XJSON-12     12.4ms ± 0%  12.4ms ± 0%  +0.41%  (p=0.000 n=18+18)
      XHTTP-12     11.8µs ± 0%  11.8µs ± 1%    ~     (p=0.492 n=20+18)
      
      It reduces the tail latency of the x/benchmarks HTTP benchmark:
      
      name      old p50-time  new p50-time  delta
      XHTTP-12    489µs ± 0%    491µs ± 1%  +0.54%  (p=0.000 n=20+18)
      
      name      old p95-time  new p95-time  delta
      XHTTP-12    957µs ± 1%    960µs ± 1%  +0.28%  (p=0.002 n=20+17)
      
      name      old p99-time  new p99-time  delta
      XHTTP-12   1.76ms ± 1%   1.64ms ± 1%  -7.20%  (p=0.000 n=20+18)
      
      Comparing to the beginning of the hybrid barrier implementation
      ("runtime: parallelize STW mcache flushing") shows that the hybrid
      barrier trades a small performance impact for much better STW latency,
      as expected. The magnitude of the performance impact is generally
      small:
      
      name                      old time/op    new time/op    delta
      BinaryTree17-12              2.37s ± 1%     2.42s ± 1%  +2.04%  (p=0.000 n=19+18)
      Fannkuch11-12                2.84s ± 0%     2.72s ± 0%  -4.00%  (p=0.000 n=19+19)
      FmtFprintfEmpty-12          44.2ns ± 1%    45.2ns ± 1%  +2.20%  (p=0.000 n=17+19)
      FmtFprintfString-12          130ns ± 1%     134ns ± 0%  +2.94%  (p=0.000 n=18+16)
      FmtFprintfInt-12             114ns ± 1%     117ns ± 0%  +3.01%  (p=0.000 n=19+15)
      FmtFprintfIntInt-12          176ns ± 1%     182ns ± 0%  +3.17%  (p=0.000 n=20+15)
      FmtFprintfPrefixedInt-12     186ns ± 1%     187ns ± 1%  +1.04%  (p=0.000 n=20+19)
      FmtFprintfFloat-12           251ns ± 1%     250ns ± 1%  -0.74%  (p=0.000 n=17+18)
      FmtManyArgs-12               746ns ± 1%     761ns ± 0%  +2.08%  (p=0.000 n=19+20)
      GobDecode-12                6.57ms ± 1%    6.65ms ± 1%  +1.11%  (p=0.000 n=19+20)
      GobEncode-12                5.59ms ± 1%    5.65ms ± 0%  +1.08%  (p=0.000 n=17+17)
      Gzip-12                      223ms ± 1%     223ms ± 1%  -0.31%  (p=0.006 n=20+20)
      Gunzip-12                   38.0ms ± 0%    37.9ms ± 1%  -0.25%  (p=0.009 n=19+20)
      HTTPClientServer-12         77.5µs ± 1%    78.9µs ± 2%  +1.89%  (p=0.000 n=20+20)
      JSONEncode-12               14.7ms ± 1%    14.9ms ± 0%  +0.75%  (p=0.000 n=20+20)
      JSONDecode-12               53.0ms ± 1%    55.9ms ± 1%  +5.54%  (p=0.000 n=19+19)
      Mandelbrot200-12            3.81ms ± 0%    3.81ms ± 1%  +0.20%  (p=0.023 n=17+19)
      GoParse-12                  3.17ms ± 1%    3.18ms ± 1%    ~     (p=0.057 n=20+19)
      RegexpMatchEasy0_32-12      71.7ns ± 1%    70.4ns ± 1%  -1.77%  (p=0.000 n=19+20)
      RegexpMatchEasy0_1K-12       946ns ± 0%     946ns ± 0%    ~     (p=0.405 n=18+18)
      RegexpMatchEasy1_32-12      67.2ns ± 2%    67.3ns ± 2%    ~     (p=0.732 n=20+20)
      RegexpMatchEasy1_1K-12       374ns ± 1%     378ns ± 1%  +1.14%  (p=0.000 n=18+19)
      RegexpMatchMedium_32-12      107ns ± 1%     107ns ± 1%    ~     (p=0.259 n=18+20)
      RegexpMatchMedium_1K-12     34.2µs ± 1%    34.5µs ± 1%  +1.03%  (p=0.000 n=18+18)
      RegexpMatchHard_32-12       1.77µs ± 1%    1.79µs ± 1%  +0.73%  (p=0.000 n=19+18)
      RegexpMatchHard_1K-12       53.6µs ± 1%    54.2µs ± 1%  +1.10%  (p=0.000 n=19+19)
      Template-12                 61.5ms ± 1%    63.9ms ± 0%  +3.96%  (p=0.000 n=18+18)
      TimeParse-12                 303ns ± 1%     300ns ± 1%  -1.08%  (p=0.000 n=19+20)
      TimeFormat-12                318ns ± 1%     320ns ± 0%  +0.79%  (p=0.000 n=19+19)
      Revcomp-12 (*)               509ms ± 3%     504ms ± 0%    ~     (p=0.967 n=7+12)
      [Geo mean]                  54.3µs         54.8µs       +0.88%
      
      (*) Revcomp is highly non-linear, so I only took samples with 2
      iterations.
      
      name         old time/op  new time/op  delta
      XGarbage-12  2.25ms ± 0%  2.32ms ± 1%  +2.74%  (p=0.000 n=16+16)
      XJSON-12     11.6ms ± 0%  12.4ms ± 0%  +6.81%  (p=0.000 n=18+18)
      XHTTP-12     11.6µs ± 1%  11.8µs ± 1%  +1.62%  (p=0.000 n=17+18)
      
      Updates #17503.
      
      Updates #17099, since you can't have a rescan list bug if there's no
      rescan list. I'm not marking it as fixed, since gcrescanstacks can
      still be set to re-enable the rescan lists.
      
      Change-Id: I6e926b4c2dbd4cd56721869d4f817bdbb330b851
      Reviewed-on: https://go-review.googlesource.com/31766
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: implement unconditional hybrid barrier · 5380b229
      Austin Clements authored
      This implements the unconditional version of the hybrid deletion write
      barrier, which always shades both the old and new pointer. It's
      unconditional for now because barriers on channel operations require
      checking both the source and destination stacks and we don't have a
      way to funnel this information into the write barrier at the moment.
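
      In pseudocode, the unconditional form shades both sides of every
      pointer write; a schematic sketch, where shade stands in for the
      runtime's greying operation:

          package main

          import "unsafe"

          func shade(p unsafe.Pointer) { _ = p } // stub: grey the object p points to

          // writePointer sketches the unconditional hybrid barrier.
          func writePointer(slot *unsafe.Pointer, ptr unsafe.Pointer) {
                  shade(*slot) // deletion (Yuasa) half: old target stays reachable
                  shade(ptr)   // insertion (Dijkstra) half: unconditional for now
                  *slot = ptr
          }

          func main() {
                  var p unsafe.Pointer
                  writePointer(&p, unsafe.Pointer(new(int)))
          }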
      
      As part of this change, we modify the typed memclr operations
      introduced earlier to invoke the write barrier.
      
      This has basically no overall effect on benchmark performance. This is
      good, since it indicates that neither the extra shade nor the new bulk
      clear barriers have much effect. It also has little effect on latency.
      This is expected, since we haven't yet modified mark termination to
      take advantage of the hybrid barrier.
      
      Updates #17503.
      
      Change-Id: Iebedf84af2f0e857bd5d3a2d525f760b5cf7224b
      Reviewed-on: https://go-review.googlesource.com/31765
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: avoid getfull() barrier most of the time · ee3d2012
      Austin Clements authored
      With the hybrid barrier, unless we're doing a STW GC or hit a very
      rare race (~once per all.bash) that can start mark termination before
      all of the work is drained, we don't need to drain the work queue at
      all. Even draining an empty work queue is rather expensive since we
      have to enter the getfull() barrier, so it's worth avoiding this.
      
      Conveniently, it's quite easy to detect whether or not we actually
      need the getfull() barrier: since the world is stopped when we enter
      mark termination, everything must have flushed its work to the work
      queue, so we can just check the queue. If the queue is empty and we
      haven't queued up any jobs that may create more work (which should
      always be the case with the hybrid barrier), we can simply have all GC
      workers perform non-blocking drains.
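
      Schematically, with hypothetical names (the real logic is
      runtime-internal):

          package main

          func workQueueEmpty() bool    { return true }  // stub: global queue check
          func jobsQueued() bool        { return false } // stub: pending work-creating jobs
          func drainWorkers(block bool) { _ = block }    // stub: each GC worker drains

          func markTermination() {
                  // The world is stopped, so every P has flushed its local
                  // work: an empty global queue means marking is complete.
                  if workQueueEmpty() && !jobsQueued() {
                          drainWorkers(false) // non-blocking: skip the getfull() barrier
                  } else {
                          drainWorkers(true) // blocking: rare fallback
                  }
          }

          func main() { markTermination() }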
      
      Also conveniently, this solution is quite safe. If we do somehow screw
      something up and there's work left on the queue, some worker will
      still process it; it just may not happen in parallel.
      
      This is not the "right" solution, but it's simple, expedient,
      low-risk, and maintains compatibility with debug.gcrescanstacks. When
      we remove the gcrescanstacks fallback in Go 1.9, we should also fix
      the race that starts mark termination early, and then we can eliminate
      work draining from mark termination.
      
      Updates #17503.
      
      Change-Id: I7b3cd5de6a248ab29d78c2b42aed8b7443641361
      Reviewed-on: https://go-review.googlesource.com/32186
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: remove unnecessary step from bulkBarrierPreWrite · d8256824
      Austin Clements authored
      Currently bulkBarrierPreWrite calls writebarrierptr_prewrite, but this
      means that we check writeBarrier.needed twice and perform cgo checks
      twice.
      
      Change bulkBarrierPreWrite to call writebarrierptr_prewrite1 to skip
      over these duplicate checks.
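
      The shape of the change, with hypothetical names (the outer helper
      performs the cheap checks once; the bulk loop then calls the inner,
      check-free variant directly):

          package main

          func barrierNeeded() bool                 { return true }     // stub: writeBarrier.needed
          func cgoCheck(dst *uintptr, src uintptr)  {}                  // stub: cgo pointer check
          func prewrite1(dst *uintptr, src uintptr) { *dst = src }      // inner, check-free write

          // prewrite does the cheap checks, then stores.
          func prewrite(dst *uintptr, src uintptr) {
                  if !barrierNeeded() {
                          return
                  }
                  cgoCheck(dst, src)
                  prewrite1(dst, src)
          }

          // bulkPrewrite checks once up front and calls prewrite1 in the
          // loop, rather than repeating the checks via prewrite per slot.
          func bulkPrewrite(dsts []*uintptr, srcs []uintptr) {
                  if !barrierNeeded() {
                          return
                  }
                  for i, dst := range dsts {
                          prewrite1(dst, srcs[i])
                  }
          }

          func main() {
                  var x uintptr
                  prewrite(&x, 7)
                  bulkPrewrite([]*uintptr{&x}, []uintptr{42})
          }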
      
      This may speed up bulkBarrierPreWrite slightly, but mostly this will
      save us from running out of nosplit stack space on ppc64x in the near
      future.
      
      Updates #17503.
      
      Change-Id: I1cea1a2207e884ab1a279c6a5e378dcdc048b63e
      Reviewed-on: https://go-review.googlesource.com/31890
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: add deletion barriers on gobuf.ctxt · 70c107c6
      Austin Clements authored
      gobuf.ctxt is set to nil from many places in assembly code, and with
      the hybrid barrier these assignments require write barriers.
      
      Conveniently, in most of these places ctxt should already be nil, in
      which case we don't need the barrier. This commit changes these places
      to assert that ctxt is already nil.
      
      gogo is more complicated, since ctxt may not already be nil. For gogo,
      we manually perform the write barrier if ctxt is not nil.
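
      A Go-level rendering of the two patterns; illustrative only, since the
      real sites are in assembly:

          package main

          import "unsafe"

          type gobuf struct{ ctxt unsafe.Pointer } // schematic stand-in

          func shade(p unsafe.Pointer) { _ = p } // stub: grey the object p points to

          // Most sites: ctxt must already be nil, so storing nil needs no
          // barrier; assert instead of emitting one.
          func clearCtxt(buf *gobuf) {
                  if buf.ctxt != nil {
                          panic("ctxt not nil")
                  }
          }

          // gogo: ctxt may be live, so shade the old value (the deletion
          // half of the hybrid barrier) before clearing it.
          func gogoClearCtxt(buf *gobuf) {
                  if buf.ctxt != nil {
                          shade(buf.ctxt)
                          buf.ctxt = nil
                  }
          }

          func main() {
                  clearCtxt(&gobuf{})
                  gogoClearCtxt(&gobuf{ctxt: unsafe.Pointer(new(int))})
          }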
      
      Updates #17503.
      
      Change-Id: I9d75e27c75a1b7f8b715ad112fc5d45ffa856d30
      Reviewed-on: https://go-review.googlesource.com/31764
      Reviewed-by: Cherry Zhang <cherryyz@google.com>