1. 08 Apr, 2015 7 commits
  2. 07 Apr, 2015 24 commits
  3. 06 Apr, 2015 9 commits
    • doc/go1.5.txt: add Reader.Size to bytes and strings · ee54d571
      Josh Bleecher Snyder authored
      Change-Id: Idd42e0f5c6ed55be2e153ac83022439e5272c1a7
      Reviewed-on: https://go-review.googlesource.com/8444
      Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
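      As a quick reference, a minimal usage sketch of the API this entry documents
      (not part of the commit itself): Size reports the length of the underlying
      data regardless of the read position, while Len reports what remains unread.

      package main

      import (
          "bytes"
          "fmt"
          "strings"
      )

      func main() {
          br := bytes.NewReader([]byte("hello, gopher"))
          sr := strings.NewReader("hello, gopher")

          // Read a few bytes; Size still reports the length of the original
          // data, while Len reports how many bytes are left unread.
          buf := make([]byte, 5)
          br.Read(buf)

          fmt.Println(br.Size(), br.Len()) // 13 8
          fmt.Println(sr.Size(), sr.Len()) // 13 13
      }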
    • runtime: add _rt0_arm_android_lib · ede863c6
      David Crawshaw authored
      At the moment this function does nothing; runtime initialization is
      still done in android.c:init_go_runtime.
      
      Fixes #10358
      
      Change-Id: I1d762383ba61efcbcf0bbc7c77895f5c1dbf8968
      Reviewed-on: https://go-review.googlesource.com/8510
      Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
    • encoding/gob: change panic into error for corrupt input · e449b570
      Rob Pike authored
      decBuffer.Drop is called with data provided by the user; don't
      panic if it's bogus.
      
      Fixes #10272.
      
      Change-Id: I913ae9c3c45cef509f2b8eb02d1efa87fbd52afa
      Reviewed-on: https://go-review.googlesource.com/8496
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
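      A hedged illustration of the behavior change (the corrupt byte sequence
      below is made up for demonstration): after this fix, feeding bogus input to
      a gob decoder surfaces an error from Decode instead of panicking.

      package main

      import (
          "bytes"
          "encoding/gob"
          "fmt"
      )

      func main() {
          // Bogus, user-supplied bytes that do not form a valid gob stream.
          corrupt := []byte{0x07, 0xff, 0x82, 0x01, 0xff, 0xff, 0xff}

          var v struct{ Name string }
          err := gob.NewDecoder(bytes.NewReader(corrupt)).Decode(&v)
          // Corrupt input is reported as an error, not a panic.
          fmt.Println("decode error:", err)
      }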
    • runtime: report marked heap size in gctrace · 8c3fc088
      Austin Clements authored
      When the gctrace GODEBUG option is enabled, it will now report three
      heap sizes: the heap size at the beginning of the GC cycle, the heap
      size at the end of the GC cycle before sweeping, and the marked heap
      size, which is the amount of heap that will be retained until the
      next GC cycle.
      
      Change-Id: Ie13f8a6d5c609bc9cc47c7555960ab55b37b5f1c
      Reviewed-on: https://go-review.googlesource.com/8430
      Reviewed-by: Rick Hudson <rlh@golang.org>
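      A small, non-authoritative way to observe this trace (the exact gctrace
      line format has varied across Go releases): run an allocating program
      with GODEBUG=gctrace=1 and watch the per-cycle heap figures on stderr.

      package main

      import "fmt"

      // Run with: GODEBUG=gctrace=1 go run main.go
      // Each GC cycle prints a line to stderr that includes per-cycle heap
      // sizes like those described above (start of cycle, end of cycle, and
      // marked/retained heap).
      func main() {
          var keep [][]byte
          for i := 0; i < 1000; i++ {
              keep = append(keep, make([]byte, 1<<20)) // allocate 1 MiB per iteration
              if i%100 == 0 {
                  keep = nil // drop all references so earlier allocations become garbage
              }
          }
          fmt.Println(len(keep))
      }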
    • runtime: make next_gc be heap size to trigger GC at · 6d12b178
      Austin Clements authored
      In the STW collector, next_gc was both the heap size at which to
      trigger GC and the goal heap size.
      
      Early in the concurrent collector's development, next_gc was the goal
      heap size, but was also used as the heap size to trigger GC at. This
      meant we always overshot the goal because of allocation during
      concurrent GC.
      
      Currently, next_gc is still the goal heap size, but we trigger
      concurrent GC at 7/8*GOGC heap growth. This complicates
      shouldtriggergc, but was necessary because of the incremental
      maintenance of next_gc.
      
      Now we simply compute next_gc for the next cycle during mark
      termination. Hence, it's now easy to take the simpler route and
      redefine next_gc as the heap size at which the next GC triggers. We
      can directly compute this with the 7/8 backoff during mark termination
      and shouldtriggergc can simply test if the live heap size has grown
      over the next_gc trigger.
      
      This will also simplify later changes once we start setting next_gc in
      more sophisticated ways.
      
      Change-Id: I872be4ae06b4f7a0d7f7967360a054bd36b90eea
      Reviewed-on: https://go-review.googlesource.com/8420
      Reviewed-by: Russ Cox <rsc@golang.org>
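      An illustrative sketch of the arithmetic described above, with hypothetical
      names rather than the runtime's own: the trigger for the next cycle is
      derived once, at mark termination, from the marked heap, GOGC, and the
      7/8 backoff.

      package main

      import "fmt"

      // nextGCTrigger sketches the 7/8*GOGC backoff: the next cycle is
      // triggered before the heap grows by a full GOGC percentage over the
      // marked (retained) heap, leaving headroom for allocation during
      // concurrent marking.
      func nextGCTrigger(markedBytes uint64, gogc int) uint64 {
          growth := markedBytes / 100 * uint64(gogc) // full GOGC growth
          return markedBytes + growth/8*7            // trigger at 7/8 of that growth
      }

      func main() {
          // With 100 MiB marked and GOGC=100, the next cycle triggers at
          // roughly 187.5 MiB instead of waiting for the 200 MiB goal.
          fmt.Println(nextGCTrigger(100<<20, 100))
      }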
    • runtime: introduce heap_live; replace use of heap_alloc in GC · d7e0ad4b
      Austin Clements authored
      Currently there are two main consumers of memstats.heap_alloc:
      updatememstats (aka ReadMemStats) and shouldtriggergc.
      
      updatememstats recomputes heap_alloc from the ground up, so we don't
      need to keep heap_alloc up to date for it. shouldtriggergc wants to
      know how many bytes were marked by the previous GC plus how many bytes
      have been allocated since then, but this *isn't* what heap_alloc
      tracks. heap_alloc also includes objects that are not marked and
      haven't yet been swept.
      
      Introduce a new memstat called heap_live that actually tracks what
      shouldtriggergc wants to know and stop keeping heap_alloc up to date.
      
      Unlike heap_alloc, heap_live follows a simple sawtooth that drops
      during each mark termination and increases monotonically between GCs.
      heap_alloc, on the other hand, has much more complicated behavior: it
      may drop during sweep termination, slowly decreases from background
      sweeping between GCs, is roughly unaffected by allocation as long as
      there are unswept spans (because we sweep and allocate at the same
      rate), and may go up after background sweeping is done depending on
      the GC trigger.
      
      heap_live simplifies computing next_gc and using it to figure out when
      to trigger garbage collection. Currently, we guess next_gc at the end
      of a cycle and update it as we sweep and get a better idea of how much
      heap was marked. Now, since we're directly tracking how much heap is
      marked, we can directly compute next_gc.
      
      This also corrects bugs that could cause us to trigger GC early.
      Currently, in any case where sweep termination actually finds spans to
      sweep, heap_alloc is an overestimation of live heap, so we'll trigger
      GC too early. heap_live, on the other hand, is unaffected by sweeping.
      
      Change-Id: I1f96807b6ed60d4156e8173a8e68745ffc742388
      Reviewed-on: https://go-review.googlesource.com/8389
      Reviewed-by: Russ Cox <rsc@golang.org>
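      A toy model of the sawtooth described above, using hypothetical names
      rather than the runtime's bookkeeping: heap_live rises monotonically with
      allocation, drops back to the marked size at mark termination, and the
      trigger test becomes a single comparison.

      package main

      import "fmt"

      // heapState is a toy model of the accounting described above.
      type heapState struct {
          heapLive uint64 // marked heap plus bytes allocated since mark termination
          nextGC   uint64 // heap_live value at which the next cycle triggers
      }

      // allocate models the monotonic rise of heap_live between GCs.
      func (h *heapState) allocate(n uint64) { h.heapLive += n }

      // shouldTriggerGC reduces to a single comparison against the trigger.
      func (h *heapState) shouldTriggerGC() bool { return h.heapLive >= h.nextGC }

      // markTermination models the drop of the sawtooth: heap_live falls back
      // to the marked heap, and the trigger is recomputed from it with the
      // same 7/8*GOGC backoff sketched for the previous commit.
      func (h *heapState) markTermination(markedBytes uint64, gogc int) {
          h.heapLive = markedBytes
          h.nextGC = markedBytes + markedBytes/100*uint64(gogc)/8*7
      }

      func main() {
          h := &heapState{}
          h.markTermination(100<<20, 100)  // 100 MiB marked; trigger ~187.5 MiB
          h.allocate(90 << 20)             // 90 MiB of new allocation
          fmt.Println(h.shouldTriggerGC()) // true: heap_live is past the trigger
      }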
    • runtime: track heap bytes marked by GC · 50a66562
      Austin Clements authored
      This tracks the number of heap bytes marked by a GC cycle. We'll use
      this information to precisely trigger the next GC cycle.
      
      Currently, the marked bytes are accumulated in a per-gcWork counter,
      and dispose atomically adds that into a global counter. dispose happens
      relatively infrequently, so the contention on the global counter
      should be low. If this turns out to be an issue, we can reduce the
      number of disposes, and if it's still a problem, we can switch to
      per-P counters.
      
      Change-Id: I1bc377cb2e802ef61c2968602b63146d52e7f5db
      Reviewed-on: https://go-review.googlesource.com/8388
      Reviewed-by: Russ Cox <rsc@golang.org>
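      A hedged sketch of the counter-flushing pattern described here, with
      illustrative names: each worker accumulates marked bytes locally and
      folds them into a shared total with one atomic add when it disposes its
      buffer, so contention on the global counter stays low.

      package main

      import (
          "fmt"
          "sync"
          "sync/atomic"
      )

      // bytesMarked is the global total for the current GC cycle.
      var bytesMarked uint64

      // gcWork models a per-worker buffer with a local bytes-marked counter.
      type gcWork struct {
          localMarked uint64
      }

      // markObject accumulates locally; no atomics on the hot path.
      func (w *gcWork) markObject(size uint64) { w.localMarked += size }

      // dispose flushes the local counter into the global total. This happens
      // relatively rarely, so contention on bytesMarked stays low.
      func (w *gcWork) dispose() {
          atomic.AddUint64(&bytesMarked, w.localMarked)
          w.localMarked = 0
      }

      func main() {
          var wg sync.WaitGroup
          for i := 0; i < 4; i++ {
              wg.Add(1)
              go func() {
                  defer wg.Done()
                  w := &gcWork{}
                  for j := 0; j < 1000; j++ {
                      w.markObject(64)
                  }
                  w.dispose()
              }()
          }
          wg.Wait()
          fmt.Println(atomic.LoadUint64(&bytesMarked)) // 4 * 1000 * 64 = 256000
      }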
    • cmd/asm/internal/asm: fix comment in ppc64.go · dfc9e264
      Rob Pike authored
      It referred to the wrong architecture.
      
      Fixes #10355.
      
      Change-Id: I5b9d31c9f04f3106b93f94fa68c848b2518b128e
      Reviewed-on: https://go-review.googlesource.com/8495
      Reviewed-by: Dave Cheney <dave@cheney.net>
    • cmd/internal/gc/big: update vendored version of math/big · da5ebecc
      Robert Griesemer authored
      This fixes the formerly extremely slow conversion of floating-point
      constants with large exponents (e.g., "const c = 1e1000000000" could
      stall the machine).
      
      Change-Id: I36e02158e3334d32b18743ec0c259fec77baa74f
      Reviewed-on: https://go-review.googlesource.com/8466
      Reviewed-by: Alan Donovan <adonovan@google.com>
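      For context, a hedged sketch of the kind of conversion involved, using
      the exported math/big API rather than the vendored copy: parsing a
      decimal constant with a very large exponent should complete essentially
      instantly (a smaller exponent than the one in the message is used here,
      since 1e1000000000 exceeds what a big.Float exponent can represent).

      package main

      import (
          "fmt"
          "math/big"
      )

      func main() {
          // A constant with a huge decimal exponent is far outside float64
          // range but is handled by the arbitrary-precision big.Float parser.
          x, ok := new(big.Float).SetString("1e100000")
          fmt.Println(ok, x) // true, and a value printed as 1e+100000
      }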