  15 Dec, 2015 (13 commits)
    • spec: be clearer about which parameter section can be variadic · 57c81ef2
      Robert Griesemer authored
      Fixes #13595.
      
      Change-Id: I870ddc97ea25b7f6f7a1bb1a78e5e4874fba1ddc
      Reviewed-on: https://go-review.googlesource.com/17871
      Reviewed-by: Rob Pike <r@golang.org>
    • net: add Dialer.Cancel to cancel pending dials · 24a83d35
      Brad Fitzpatrick authored
      Dialer.Cancel is a new optional <-chan struct{} channel whose closure
      indicates that the dial should be canceled. It is compatible with the
      x/net/context and http.Request.Cancel types.
      
      Tested by hand with:
      
          package main
      
          import (
                  "log"
                  "net"
                  "time"
          )
      
          func main() {
                  log.Printf("start.")
                  var d net.Dialer
                  cancel := make(chan struct{})
                  time.AfterFunc(2*time.Second, func() {
                          log.Printf("timeout firing")
                          close(cancel)
                  })
                  d.Cancel = cancel
                  c, err := d.Dial("tcp", "192.168.0.1:22")
                  if err != nil {
                          log.Print(err)
                          return
                  }
                  log.Fatalf("unexpected connect: %v", c)
          }
      
      Which says:
      
          2015/12/14 22:24:58 start.
          2015/12/14 22:25:00 timeout firing
          2015/12/14 22:25:00 dial tcp 192.168.0.1:22: operation was canceled
      
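      Since Dialer.Cancel has the same <-chan struct{} type as the Done channel
      of golang.org/x/net/context and as http.Request.Cancel, it can be wired
      up directly. A minimal sketch of that compatibility (hypothetical code,
      not part of this CL; assumes the x/net/context package is available):

          package main

          import (
                  "log"
                  "net"
                  "time"

                  "golang.org/x/net/context"
          )

          func main() {
                  ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
                  defer cancel()

                  var d net.Dialer
                  // Done() returns a <-chan struct{}, matching Dialer.Cancel.
                  d.Cancel = ctx.Done()
                  c, err := d.Dial("tcp", "192.168.0.1:22")
                  if err != nil {
                          log.Print(err) // e.g. "operation was canceled" once the context expires
                          return
                  }
                  defer c.Close()
                  log.Printf("connected to %v", c.RemoteAddr())
          }
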
      Fixes #11225
      
      Change-Id: I2ef39e3a540e29fe6bfec03ab7a629a6b187fcb3
      Reviewed-on: https://go-review.googlesource.com/17821
      Reviewed-by: Russ Cox <rsc@golang.org>
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
    • net/http: maybe deflake TestCancelRequestMidBody_h2 on linux-noopt builder · 479c47e4
      Brad Fitzpatrick authored
      This might deflake it. Or it'll at least give us more debugging clues.
      
      Fixes #13626 maybe
      
      Change-Id: Ie8cd0375d60dad033ec6a64830a90e7b9152a3d9
      Reviewed-on: https://go-review.googlesource.com/17825
      Reviewed-by: Austin Clements <austin@google.com>
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
    • net/http: rework CloseNotifier implementation, clarify expectations in docs · 99fb1919
      Brad Fitzpatrick authored
      CloseNotifier wasn't well specified previously. This CL simplifies its
      implementation, clarifies the public documentation on CloseNotifier,
      clarifies internal documentation on conn, and fixes two CloseNotifier
      bugs in the process.
      
      The main change, though, is tightening the rules and expectations for using
      CloseNotifier (a usage sketch follows the list):
      
      * the caller must consume the Request.Body first (old rule, unwritten)
      * the received value is the "true" value (old rule, unwritten)
      * no promises for channel sends after Handler returns (old rule, unwritten)
      * a subsequent pipelined request fires the CloseNotifier (new behavior;
        previously it never fired and thus effectively deadlocked as in #13165)
      * advise that it should only be used without HTTP/1.1 pipelining (use HTTP/2
        or non-idempotent browsers). Not that browsers actually use pipelining.
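
      A minimal handler sketch following the rules above (hypothetical code, not
      part of this CL): drain the request body before asking for the channel, and
      don't rely on the channel after the handler returns.

          package main

          import (
                  "io"
                  "io/ioutil"
                  "log"
                  "net/http"
                  "time"
          )

          func slowHandler(w http.ResponseWriter, r *http.Request) {
                  // Rule: consume the request body before using CloseNotify.
                  io.Copy(ioutil.Discard, r.Body)

                  cn, ok := w.(http.CloseNotifier)
                  if !ok {
                          http.Error(w, "CloseNotifier not supported", http.StatusInternalServerError)
                          return
                  }
                  select {
                  case <-cn.CloseNotify():
                          // The client went away; abandon the work. No sends are
                          // promised on this channel once the handler has returned.
                          log.Print("client closed the connection")
                  case <-time.After(5 * time.Second): // stand-in for slow work
                          io.WriteString(w, "done\n")
                  }
          }

          func main() {
                  http.HandleFunc("/slow", slowHandler)
                  log.Fatal(http.ListenAndServe("localhost:8080", nil))
          }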
      
      The main implementation change is that each Handler now gets its own
      CloseNotifier channel value, rather than sharing one between the whole
      conn. This means Handlers can't affect subsequent requests. This is
      how HTTP/2's Server works too. The old docs never clarified a behavior
      either way. The other side effect of each request getting its own
      CloseNotifier channel is that one handler can't "poison" the
      underlying conn preventing subsequent requests on the same connection
      from using CloseNotifier (this is #9763).
      
      In the old implementation, once any request on a connection used
      CloseNotifier, the conn's underlying bufio.Reader source was switched
      from the TCPConn to the read side of the pipe being fed by a
      never-ending copy. Since it was impossible to abort that never-ending
      copy, we could never get back to a fresh state where it was possible
      to return the underlying TCPConn to callers of Hijack. Now, instead of
      a never-ending Copy, the background goroutine doing a Read from the
      TCPConn (or *tls.Conn) only reads a single byte. That single byte
      can be in the request body, a socket timeout error, an io.EOF error, or
      the first byte of the second body. In any case, the new *connReader
      type stitches sync and async reads together like an io.MultiReader. To
      clarify the flow of Read data and combat the complexity of too many
      wrapper Reader types, the *connReader absorbs the io.LimitReader
      previously used for bounding request header reads.  The
      liveSwitchReader type is removed. (An unused switchWriter type is also
      removed.)
      
      Many fields on *conn are also documented more fully.
      
      Fixes #9763 (CloseNotify + Hijack together)
      Fixes #13165 (deadlock with CloseNotify + pipelined requests)
      
      Change-Id: I40abc0a1992d05b294d627d1838c33cbccb9dd65
      Reviewed-on: https://go-review.googlesource.com/17750
      Reviewed-by: Russ Cox <rsc@golang.org>
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
    • runtime: only trigger forced GC if GC is not running · 01baf13b
      Austin Clements authored
      Currently, sysmon triggers a forced GC solely based on
      memstats.last_gc. However, memstats.last_gc isn't updated until mark
      termination, so once sysmon starts triggering forced GC, it will keep
      triggering them until GC finishes. The first of these actually starts
      a GC; the remainder (up to the last) merely print "GC forced", since
      gcStart returns immediately because gcphase != _GCoff; and the last one
      may start yet another GC if the previous GC finishes (and sets last_gc)
      between sysmon triggering it and gcStart checking the GC phase.
      
      Fix this by expanding the condition for starting a forced GC to also
      require that no GC is currently running. This, combined with the way
      forcegchelper blocks until the GC cycle is started, ensures sysmon
      only starts one GC when the time exceeds the forced GC threshold.
      
      Fixes #13458.
      
      Change-Id: Ie6cf841927f6085136be3f45259956cd5cf10d23
      Reviewed-on: https://go-review.googlesource.com/17819
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: simplify sigprof traceback interlocking · 50d8d4e8
      Austin Clements authored
      The addition of stack barrier locking to copystack subsumes the
      partial fix from commit bbd1a1c7 for SIGPROF during copystack. With the
      stack barrier locking, this commit simplifies the rule in sigprof to:
      the user stack can be traced only if sigprof can acquire the stack
      barrier lock.
      
      Updates #12932, #13362.
      
      Change-Id: I1c1f80015053d0ac7761e9e0c7437c2aba26663f
      Reviewed-on: https://go-review.googlesource.com/17192
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • cmd/compile: change dead code into assert · 22a204dd
      Matthew Dempsky authored
      After fixing #13587, I noticed that the "OAS2FUNC in disguise" block
      looked like it probably needed write barriers too.  However, testing
      revealed the multi-value "return f()" case was already being handled
      correctly.
      
      It turns out this block is dead code due to "return f()" already being
      transformed into "t1, t2, ..., tN := f(); return t1, t2, ..., tN" by
      orderstmt when f is a multi-valued function.
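
      As an illustration of that transformation (hypothetical source, not actual
      compiler output), a multi-value "return f()" is already rewritten by
      orderstmt into explicit temporaries:

          package p

          func f() (int, error) { return 42, nil }

          // Original source:
          func g() (int, error) {
                  return f()
          }

          // What orderstmt effectively produces:
          func gOrdered() (int, error) {
                  t1, t2 := f()
                  return t1, t2
          }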
      
      Updates #13587.
      
      Change-Id: Icde46dccc55beda2ea5fd5fcafc9aae26cec1552
      Reviewed-on: https://go-review.googlesource.com/17759
      Run-TryBot: Matthew Dempsky <mdempsky@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
    • runtime: update triggerRatio in setGCPercent · 2bacae81
      Austin Clements authored
      Currently, runtime/debug.SetGCPercent does not adjust the controller
      trigger ratio. As a result, runtime reductions of GOGC don't take full
      effect until after one more concurrent cycle has happened, which
      adjusts the trigger ratio to account for the new gcpercent.
      
      Fix this by lowering the trigger ratio if necessary in setGCPercent.
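
      For example (hypothetical usage; runtime/debug.SetGCPercent is the public
      entry point affected), lowering GOGC at run time now tightens the pacer
      immediately instead of only after the next concurrent cycle:

          package main

          import (
                  "fmt"
                  "runtime/debug"
          )

          func main() {
                  // Returns the previous setting (100 unless GOGC said otherwise).
                  old := debug.SetGCPercent(50)
                  fmt.Printf("GOGC lowered from %d to 50\n", old)

                  // ... run a memory-heavy phase under the tighter setting ...

                  debug.SetGCPercent(old) // restore
          }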
      
      Change-Id: I4d23e0c58d91939b86ac60fa5d53ef91d0d89e0c
      Reviewed-on: https://go-review.googlesource.com/17813
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: print gctrace before releasing worldsema · 1e1ea669
      Austin Clements authored
      Currently we drop worldsema and then print the gctrace. We did this so
      that if stderr is a pipe or a blocked terminal, blocking on printing
      the gctrace would not block another GC from starting. However, this is
      a bit of a fool's errand because a blocked runtime print will block
      the whole M/P, so after GOMAXPROCS GC cycles, the whole system will
      freeze. Furthermore, now this is much less of an issue because
      allocation will block indefinitely if it can't start a GC (whereas it
      used to be that allocation could run away). Finally, releasing worldsema
      first allows another GC cycle to start while the previous cycle is still
      printing the gctrace, which leads to races between reading the various
      statistics for printing and the next GC cycle overwriting them.
      
      Fix this by moving the release of worldsema after the gctrace print.
      
      Change-Id: I3d044ea0f77d80f3b4050af6b771e7912258662a
      Reviewed-on: https://go-review.googlesource.com/17812
      Run-TryBot: Austin Clements <austin@google.com>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: reset sweep stats before starting the world · ff5c9453
      Austin Clements authored
      Currently we reset the sweep stats just after gcMarkTermination starts
      the world and releases worldsema. However, background sweeping can
      start the moment we start the world and, in fact, pause sweeping can
      start the moment we release worldsema (because another GC cycle can
      start up), so these need to be cleared before starting the world.
      
      Change-Id: I95701e3de6af76bb3fbf2ee65719985bf57d20b2
      Reviewed-on: https://go-review.googlesource.com/17811
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: fix (sometimes major) underestimation of heap_live · 87d939de
      Austin Clements authored
      Currently, we update memstats.heap_live from mcache.local_cachealloc
      whenever we lock the heap (e.g., to obtain a fresh span or to release
      an unused span). However, under the right circumstances,
      local_cachealloc can accumulate allocations up to the size of
      the *entire heap* without flushing them to heap_live. Specifically,
      since span allocations from an mcentral don't lock the heap, if a
      large number of pages are held in an mcentral and the application
      continues to use and free objects of that size class (e.g., the
      BinaryTree17 benchmark), local_cachealloc won't be flushed until the
      mcentral runs out of spans.
      
      This is a problem because, unlike many of the memory statistics that
      are purely informative, heap_live is used to determine when the
      garbage collector should start and how hard it should work.
      
      This commit eliminates local_cachealloc, instead atomically updating
      heap_live directly. To control contention, we do this only when
      obtaining a span from an mcentral. Furthermore, we make heap_live
      conservative: allocating a span assumes that all free slots in that
      span will be used and accounts for these when the span is
      allocated, *before* the objects themselves are. This is important
      because 1) this triggers the GC earlier than necessary rather than
      potentially too late and 2) this leads to a conservative GC rate
      rather than a GC rate that is potentially too low.
      
      Alternatively, we could have flushed local_cachealloc when it passed
      some threshold, but this would require determining a threshold and
      would cause heap_live to underestimate the true value rather than
      overestimate.
      
      Fixes #12199.
      
      name                      old time/op    new time/op    delta
      BinaryTree17-12              2.88s ± 4%     2.88s ± 1%    ~     (p=0.470 n=19+19)
      Fannkuch11-12                2.48s ± 1%     2.48s ± 1%    ~     (p=0.243 n=16+19)
      FmtFprintfEmpty-12          50.9ns ± 2%    50.7ns ± 1%    ~     (p=0.238 n=15+14)
      FmtFprintfString-12          175ns ± 1%     171ns ± 1%  -2.48%  (p=0.000 n=18+18)
      FmtFprintfInt-12             159ns ± 1%     158ns ± 1%  -0.78%  (p=0.000 n=19+18)
      FmtFprintfIntInt-12          270ns ± 1%     265ns ± 2%  -1.67%  (p=0.000 n=18+18)
      FmtFprintfPrefixedInt-12     235ns ± 1%     234ns ± 0%    ~     (p=0.362 n=18+19)
      FmtFprintfFloat-12           309ns ± 1%     308ns ± 1%  -0.41%  (p=0.001 n=18+19)
      FmtManyArgs-12              1.10µs ± 1%    1.08µs ± 0%  -1.96%  (p=0.000 n=19+18)
      GobDecode-12                7.81ms ± 1%    7.80ms ± 1%    ~     (p=0.425 n=18+19)
      GobEncode-12                6.53ms ± 1%    6.53ms ± 1%    ~     (p=0.817 n=19+19)
      Gzip-12                      312ms ± 1%     312ms ± 2%    ~     (p=0.967 n=19+20)
      Gunzip-12                   42.0ms ± 1%    41.9ms ± 1%    ~     (p=0.172 n=19+19)
      HTTPClientServer-12         63.7µs ± 1%    63.8µs ± 1%    ~     (p=0.639 n=19+19)
      JSONEncode-12               16.4ms ± 1%    16.4ms ± 1%    ~     (p=0.954 n=19+19)
      JSONDecode-12               58.5ms ± 1%    57.8ms ± 1%  -1.27%  (p=0.000 n=18+19)
      Mandelbrot200-12            3.86ms ± 1%    3.88ms ± 0%  +0.44%  (p=0.000 n=18+18)
      GoParse-12                  3.67ms ± 2%    3.66ms ± 1%  -0.52%  (p=0.001 n=18+19)
      RegexpMatchEasy0_32-12       100ns ± 1%     100ns ± 0%    ~     (p=0.257 n=19+18)
      RegexpMatchEasy0_1K-12       347ns ± 1%     347ns ± 1%    ~     (p=0.527 n=18+18)
      RegexpMatchEasy1_32-12      83.7ns ± 2%    83.1ns ± 2%    ~     (p=0.096 n=18+19)
      RegexpMatchEasy1_1K-12       509ns ± 1%     505ns ± 1%  -0.75%  (p=0.000 n=18+19)
      RegexpMatchMedium_32-12      130ns ± 2%     129ns ± 1%    ~     (p=0.962 n=20+20)
      RegexpMatchMedium_1K-12     39.5µs ± 2%    39.4µs ± 1%    ~     (p=0.376 n=20+19)
      RegexpMatchHard_32-12       2.04µs ± 0%    2.04µs ± 1%    ~     (p=0.195 n=18+17)
      RegexpMatchHard_1K-12       61.4µs ± 1%    61.4µs ± 1%    ~     (p=0.885 n=19+19)
      Revcomp-12                   540ms ± 2%     542ms ± 4%    ~     (p=0.552 n=19+17)
      Template-12                 69.6ms ± 1%    71.2ms ± 1%  +2.39%  (p=0.000 n=20+20)
      TimeParse-12                 357ns ± 1%     357ns ± 1%    ~     (p=0.883 n=18+20)
      TimeFormat-12                379ns ± 1%     362ns ± 1%  -4.53%  (p=0.000 n=18+19)
      [Geo mean]                  62.0µs         61.8µs       -0.44%
      
      name              old time/op  new time/op  delta
      XBenchGarbage-12  5.89ms ± 2%  5.81ms ± 2%  -1.41%  (p=0.000 n=19+18)
      
      Change-Id: I96b31cca6ae77c30693a891cff3fe663fa2447a0
      Reviewed-on: https://go-review.googlesource.com/17748
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: trace sweep completion in gcpacertrace mode · 4ad64cad
      Austin Clements authored
      Change-Id: I7991612e4d064c15492a39c19f753df1db926203
      Reviewed-on: https://go-review.googlesource.com/17747
      Run-TryBot: Austin Clements <austin@google.com>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: check for spanBytesAlloc underflow · c1cbe5b5
      Austin Clements authored
      Change-Id: I5e6739ff0c6c561195ed9891fb90f933b81e7750
      Reviewed-on: https://go-review.googlesource.com/17746
      Run-TryBot: Austin Clements <austin@google.com>
      Reviewed-by: Russ Cox <rsc@golang.org>