1. 11 May, 2015 3 commits
  2. 10 May, 2015 2 commits
  3. 09 May, 2015 3 commits
  4. 08 May, 2015 6 commits
  5. 07 May, 2015 19 commits
  6. 06 May, 2015 7 commits
    • go/build: enable cgo by default on iOS · 4a8dbaa4
      Shenghou Ma authored
      Otherwise misc/cgo/test won't be tested on iOS.
      
      Change-Id: I7ee78c825b0bb092c7a8b2c2ece5a6eda2f6cf95
      Reviewed-on: https://go-review.googlesource.com/9643
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • cmd/cgo: readability improvements to generated _cgo_export.h · 2f9acc13
      Ian Lance Taylor authored
      Also copy doc comments from Go code to _cgo_export.h.
      
      This is a step toward installing this generated file when using
      -buildmode=c-archive or c-shared, so that C code can #include it.
      
      Change-Id: I3a243f7b386b58ec5c5ddb9a246bb9f9eddc5fb8
      Reviewed-on: https://go-review.googlesource.com/9790
      Reviewed-by: Minux Ma <minux@golang.org>
      Reviewed-by: David Crawshaw <crawshaw@golang.org>
    • cmd/doc: add type-bound vars to global vars list · da4fc529
      Rob Pike authored
      Already done for constants and funcs, but I didn't realize that some
      global vars were also not in the global list. This fixes
      
      	go doc build.Default
      
      Change-Id: I768bde13a400259df3e46dddc9f58c8f0e993c72
      Reviewed-on: https://go-review.googlesource.com/9764
      Reviewed-by: Andrew Gerrand <adg@golang.org>
    • testing: document that Log and Logf always print in benchmarks · e9827f62
      Rob Pike authored
      Fixes #10713.
      
      Change-Id: Ifdafc340ae3bba751236f0482246c568346a569c
      Reviewed-on: https://go-review.googlesource.com/9763
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • runtime: use heap scan size as estimate of GC scan work · 17db6e04
      Austin Clements authored
      Currently, the GC uses a moving average of recent scan work ratios to
      estimate the total scan work required by this cycle. This is in turn
      used to compute how much scan work should be done by mutators when
      they allocate in order to perform all expected scan work by the time
      the allocated heap reaches the heap goal.
      
      However, our current scan work estimate can be arbitrarily wrong if
      the heap topography changes significantly from one cycle to the
      next. For example, in the go1 benchmarks, at the beginning of each
      benchmark, the heap is dominated by a 256MB no-scan object, so the GC
      learns that the scan density of the heap is very low. In benchmarks
      that then rapidly allocate pointer-dense objects, by the time of the
      next GC cycle, our estimate of the scan work can be too low by a large
      factor. This in turn lets the mutator allocate faster than the GC can
      collect, allowing it to get arbitrarily far ahead of the scan work
      estimate, which leads to very long GC cycles with very little mutator
      assist that can overshoot the heap goal by large margins. This is
      particularly easy to demonstrate with BinaryTree17:
      
      $ GODEBUG=gctrace=1 ./go1.test -test.bench BinaryTree17
      gc #1 @0.017s 2%: 0+0+0+0+0 ms clock, 0+0+0+0/0/0+0 ms cpu, 4->262->262 MB, 4 MB goal, 1 P
      gc #2 @0.026s 3%: 0+0+0+0+0 ms clock, 0+0+0+0/0/0+0 ms cpu, 262->262->262 MB, 524 MB goal, 1 P
      testing: warning: no tests to run
      PASS
      BenchmarkBinaryTree17	gc #3 @1.906s 0%: 0+0+0+0+7 ms clock, 0+0+0+0/0/0+7 ms cpu, 325->325->287 MB, 325 MB goal, 1 P (forced)
      gc #4 @12.203s 20%: 0+0+0+10067+10 ms clock, 0+0+0+0/2523/852+10 ms cpu, 430->2092->1950 MB, 574 MB goal, 1 P
             1       9150447353 ns/op
      
      Change this estimate to instead use the *current* scannable heap
      size. This has the advantage of being based solely on the current
      state of the heap, not on past densities or reachable heap sizes, so
      it isn't susceptible to falling behind during these sorts of phase
      changes. This is strictly an over-estimate, but it's better to
      over-estimate and get more assist than necessary than it is to
      under-estimate and potentially spiral out of control. Experiments with
      scaling this estimate back showed no obvious benefit for mutator
      utilization, heap size, or assist time.
      
      This new estimate has little effect for most benchmarks, including
      most go1 benchmarks, x/benchmarks, and the 6g benchmark. It has a huge
      effect for benchmarks that triggered the bad pacer behavior:
      
      name                   old mean              new mean              delta
      BinaryTree17            10.0s × (1.00,1.00)    3.5s × (0.98,1.01)  -64.93% (p=0.000)
      Fannkuch11              2.74s × (1.00,1.01)   2.65s × (1.00,1.00)   -3.52% (p=0.000)
      FmtFprintfEmpty        56.4ns × (0.99,1.00)  57.8ns × (1.00,1.01)   +2.43% (p=0.000)
      FmtFprintfString        187ns × (0.99,1.00)   185ns × (0.99,1.01)   -1.19% (p=0.010)
      FmtFprintfInt           184ns × (1.00,1.00)   183ns × (1.00,1.00)  (no variance)
      FmtFprintfIntInt        321ns × (1.00,1.00)   315ns × (1.00,1.00)   -1.80% (p=0.000)
      FmtFprintfPrefixedInt   266ns × (1.00,1.00)   263ns × (1.00,1.00)   -1.22% (p=0.000)
      FmtFprintfFloat         353ns × (1.00,1.00)   353ns × (1.00,1.00)   -0.13% (p=0.035)
      FmtManyArgs            1.21µs × (1.00,1.00)  1.19µs × (1.00,1.00)   -1.33% (p=0.000)
      GobDecode              9.69ms × (1.00,1.00)  9.59ms × (1.00,1.00)   -1.07% (p=0.000)
      GobEncode              7.89ms × (0.99,1.01)  7.74ms × (1.00,1.00)   -1.92% (p=0.000)
      Gzip                    391ms × (1.00,1.00)   392ms × (1.00,1.00)     ~    (p=0.522)
      Gunzip                 97.1ms × (1.00,1.00)  97.0ms × (1.00,1.00)   -0.10% (p=0.000)
      HTTPClientServer       55.7µs × (0.99,1.01)  56.7µs × (0.99,1.01)   +1.81% (p=0.001)
      JSONEncode             19.1ms × (1.00,1.00)  19.0ms × (1.00,1.00)   -0.85% (p=0.000)
      JSONDecode             66.8ms × (1.00,1.00)  66.9ms × (1.00,1.00)     ~    (p=0.288)
      Mandelbrot200          4.13ms × (1.00,1.00)  4.12ms × (1.00,1.00)   -0.08% (p=0.000)
      GoParse                3.97ms × (1.00,1.01)  4.01ms × (1.00,1.00)   +0.99% (p=0.000)
      RegexpMatchEasy0_32     114ns × (1.00,1.00)   115ns × (0.99,1.00)     ~    (p=0.070)
      RegexpMatchEasy0_1K     376ns × (1.00,1.00)   376ns × (1.00,1.00)     ~    (p=0.900)
      RegexpMatchEasy1_32    94.9ns × (1.00,1.00)  96.3ns × (1.00,1.01)   +1.53% (p=0.001)
      RegexpMatchEasy1_1K     568ns × (1.00,1.00)   567ns × (1.00,1.00)   -0.22% (p=0.001)
      RegexpMatchMedium_32    159ns × (1.00,1.00)   159ns × (1.00,1.00)     ~    (p=0.178)
      RegexpMatchMedium_1K   46.4µs × (1.00,1.00)  46.6µs × (1.00,1.00)   +0.29% (p=0.000)
      RegexpMatchHard_32     2.37µs × (1.00,1.00)  2.37µs × (1.00,1.00)     ~    (p=0.722)
      RegexpMatchHard_1K     71.1µs × (1.00,1.00)  71.2µs × (1.00,1.00)     ~    (p=0.229)
      Revcomp                 565ms × (1.00,1.00)   562ms × (1.00,1.00)   -0.52% (p=0.000)
      Template               81.0ms × (1.00,1.00)  80.2ms × (1.00,1.00)   -0.97% (p=0.000)
      TimeParse               380ns × (1.00,1.00)   380ns × (1.00,1.00)     ~    (p=0.148)
      TimeFormat              405ns × (0.99,1.00)   385ns × (0.99,1.00)   -5.00% (p=0.000)
      
      Change-Id: I11274158bf3affaf62662e02de7af12d5fb789e4
      Reviewed-on: https://go-review.googlesource.com/9696
      Reviewed-by: Russ Cox <rsc@golang.org>
      Run-TryBot: Austin Clements <austin@google.com>
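The arithmetic behind the change can be sketched as follows (function and variable names are illustrative, not the real runtime identifiers): the expected scan work for a cycle is now simply the current scannable heap size, and the assist ratio divides that by the allocation headroom left before the heap goal.

```go
package main

import "fmt"

// assistRatio sketches the new pacer estimate: assume the entire
// currently scannable heap may need scanning this cycle (a deliberate
// over-estimate), and spread that work over the bytes that can still
// be allocated before the heap goal is reached.
func assistRatio(heapScan, heapLive, heapGoal uint64) float64 {
	scanWorkExpected := float64(heapScan)    // current scannable heap
	headroom := float64(heapGoal - heapLive) // allocatable bytes left
	// Mutators must do this much scan work per byte they allocate.
	return scanWorkExpected / headroom
}

func main() {
	// E.g. 64 MB scannable, 100 MB live, 200 MB goal:
	fmt.Printf("%.2f scan bytes per allocated byte\n",
		assistRatio(64<<20, 100<<20, 200<<20))
}
```

Because the estimate depends only on the heap's current state, a phase change like BinaryTree17's (a huge no-scan object followed by pointer-dense allocation) raises the ratio immediately instead of waiting for a moving average to catch up.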
    • runtime: track "scannable" bytes of heap · 3be3cbd5
      Austin Clements authored
      This tracks the number of scannable bytes in the allocated heap. That
      is, bytes that the garbage collector must scan before reaching the
      last pointer field in each object.
      
      This will be used to compute a more robust estimate of the GC scan
      work.
      
      Change-Id: I1eecd45ef9cdd65b69d2afb5db5da885c80086bb
      Reviewed-on: https://go-review.googlesource.com/9695
      Reviewed-by: Russ Cox <rsc@golang.org>
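An illustrative sketch of the bookkeeping, not the real runtime code (type and field names are invented): each allocation contributes only its scannable prefix, the bytes the collector must scan before passing the last pointer field, to the tracked total.

```go
package main

import "fmt"

// allocType stands in for an allocated object's type information.
type allocType struct {
	size    uintptr // total object size in bytes
	ptrdata uintptr // bytes up to the last pointer field (0 = no-scan)
}

// scannableBytes accumulates the scannable portion of each allocation;
// scalar tails and no-scan objects add nothing to the total.
func scannableBytes(allocs []allocType) uintptr {
	var heapScan uintptr
	for _, t := range allocs {
		heapScan += t.ptrdata
	}
	return heapScan
}

func main() {
	allocs := []allocType{
		{size: 64, ptrdata: 16},       // pointers only in the first 16 bytes
		{size: 256 << 20, ptrdata: 0}, // huge no-scan object (cf. go1 benchmarks)
		{size: 32, ptrdata: 32},       // pointer-dense object
	}
	fmt.Println(scannableBytes(allocs)) // prints 48
}
```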
    • runtime: include scalar slots in GC scan work metric · 53c53984
      Austin Clements authored
      The garbage collector predicts how much "scan work" must be done in a
      cycle to determine how much work should be done by mutators when they
      allocate. Most code doesn't care what units the scan work is in: it
      simply knows that a certain amount of scan work has to be done in the
      cycle. Currently, the GC uses the number of pointer slots scanned as
      the scan work on the theory that this is the bulk of the time spent in
      the garbage collector and hence reflects real CPU resource usage.
      However, this metric is difficult to estimate at the beginning of a
      cycle.
      
      Switch to counting the total number of bytes scanned, including both
      pointer and scalar slots. This is still less than the total marked
      heap since it omits no-scan objects and no-scan tails of objects. This
      metric may not reflect absolute performance as well as the count of
      scanned pointer slots (though it still takes time to scan scalar
      fields), but it will be much easier to estimate robustly, which is
      more important.
      
      Change-Id: Ie3a5eeeb0384a1ca566f61b2f11e9ff3a75ca121
      Reviewed-on: https://go-review.googlesource.com/9694
      Reviewed-by: Russ Cox <rsc@golang.org>
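The difference between the two metrics can be shown with a hypothetical 64-bit object layout (all offsets here are invented for illustration): 48 bytes total, with pointer fields at byte offsets 0 and 16.

```go
package main

import "fmt"

const (
	wordSize   = 8  // pointer size on 64-bit
	objSize    = 48 // total object size in bytes
	lastPtrOff = 16 // byte offset of the last pointer field
)

func main() {
	ptrSlots := 2 // old metric: number of pointer slots scanned
	// New metric: every byte up to and including the last pointer
	// field counts, scalar slots included; the no-scan tail does not.
	scanBytes := lastPtrOff + wordSize
	fmt.Println(ptrSlots, scanBytes, objSize-scanBytes) // prints: 2 24 24
}
```

The 24-byte no-scan tail is omitted from both metrics, which is why bytes scanned is still less than the total marked heap.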