1. 11 Nov, 2015 12 commits
    • runtime: replace traceBuf slice with index · f5c42cf8
      Austin Clements authored
      Currently traceBuf keeps track of where it is in the trace buffer by
      also maintaining a slice that points in to this buffer with an initial
      length of 0 and a cap of the length of the array. All writes to this
      buffer are done by appending to the slice (as long as the bounds
      checks are right, it will never overflow and the append won't allocate
      a new slice).
      
      Each of these appends generates a write barrier. As long as we never
      overflow the buffer, this write barrier won't fire, but this wreaks
      havoc with eliminating write barriers from the tracing code. If we
      were to overflow the buffer, this would both allocate and invoke a
      write barrier, both things that are dicey at best to do in many of the
      contexts tracing happens. It also wastes space in the traceBuf and
      leads to more complex code and more complex generated code.
      
      Replace this slice trick with keeping track of a simple array
      position.
      
      Updates #10600.
      
      Change-Id: I0a63eecec1992e195449f414ed47653f66318d0e
      Reviewed-on: https://go-review.googlesource.com/16814
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    • runtime: eliminate traceStack write barriers · 2be1ed80
      Austin Clements authored
      This replaces *traceStack with traceStackPtr, much like the preceding
      commit.
      
      Updates #10600.
      
      Change-Id: Ifadc35eb37a405ae877f9740151fb31a0ca1d08f
      Reviewed-on: https://go-review.googlesource.com/16813
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    • runtime: eliminate traceBuf write barriers · 03227bb5
      Austin Clements authored
      The tracing code is currently called from contexts such as sysmon and
      the scheduler where write barriers are not allowed. Unfortunately,
      while the common paths through the tracing code do not have write
      barriers, many of the less common paths dealing with buffer overflow
      and recycling do.
      
      This change replaces all *traceBufs with traceBufPtrs. In the style of
      guintptr, etc., the GC does not trace traceBufPtrs and write barriers
      do not apply when these pointers are written. Since traceBufs are
      allocated from non-GC'd memory and manually managed, this is always
      safe.
      
      Updates #10600.
      
      Change-Id: I52b992d36d1b634ebd855c8cde27947ec14f59ba
      Reviewed-on: https://go-review.googlesource.com/16812
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    • doc: update go1.6.txt · a9a7e406
      Brad Fitzpatrick authored
      Mention shallow clones.
      
      Fixes #13204
      
      Change-Id: I0ed9d4e829d388425beba0d64e6889d16d4bb173
      Reviewed-on: https://go-review.googlesource.com/16822
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • runtime: fix use of xadd64 · 7d1d6429
      Austin Clements authored
      Commit 7407d8e5 was rebased over the switch to runtime/internal/atomic
      and introduced a call to xadd64, which no longer exists. Fix that
      call.
      
      Change-Id: I99c93469794c16504ae4a8ffe3066ac382c66a3a
      Reviewed-on: https://go-review.googlesource.com/16816
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: fix over-aggressive proportional sweep · 7407d8e5
      Austin Clements authored
      Currently, sweeping is performed before allocating a span by charging
      for the entire size of the span requested, rather than the number of
      bytes actually available for allocation from the returned span. That
      is, if the returned span is 8K, but already has 6K in use, the mutator
      is charged for 8K of heap allocation even though it can only allocate
      2K more from the span. As a result, proportional sweep is
      over-aggressive and tends to finish much earlier than it needs to.
      This effect is further amplified by fragmented heaps.
      
      Fix this by reimbursing the mutator for the used space in a span once
      it has allocated that span. We still have to charge up-front for the
      worst-case because we don't know which span the mutator will get, but
      at least we can correct the over-charge once it has a span, which will
      go toward later span allocations.
      
      This has negligible effect on the throughput of the go1 benchmarks and
      the garbage benchmark.
      
      Fixes #12040.
      
      Change-Id: I0e23e7a4ccf126cca000fed5067b20017028dd6b
      Reviewed-on: https://go-review.googlesource.com/16515
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • cmd/go: use shallow clones for new git checkouts · bc1f9d20
      Meng Zhuo authored
      Currently go get clones the full history of git repos. We can
      reduce download time and size by passing a depth argument.
      
      The docs about shallow clones and the --depth argument are here:
      https://git-scm.com/docs/git-clone
      https://git-scm.com/docs/git-pull
      
      Fixes #13078
      
      Change-Id: Ie891d905d9c77f6ecadf7dcd5b44b477f4e079e0
      Reviewed-on: https://go-review.googlesource.com/16360
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • runtime: don't call msanread when running on the system stack · 880a6891
      Ian Lance Taylor authored
      The runtime is not instrumented, but the calls to msanread in the
      runtime can sometimes refer to the system stack.  An example is the call
      to copy in stkbucket in mprof.go.  Depending on what C code has done,
      the system stack may appear uninitialized to msan.
      
      Change-Id: Ic21705b9ac504ae5cf7601a59189302f072e7db1
      Reviewed-on: https://go-review.googlesource.com/16660
      Reviewed-by: David Crawshaw <crawshaw@golang.org>
    • runtime: mark cgo callback results as written for msan · 8f3f2cca
      Ian Lance Taylor authored
      This is a fix for the -msan option when using cgo callbacks.  A cgo
      callback works by writing out C code that puts a struct on the stack and
      passes the address of that struct into Go.  The result parameters are
      fields of the struct.  The Go code will write to the result parameters,
      but the Go code thinks it is just writing into the Go stack, and
      therefore won't call msanwrite.  This CL adds a call to msanwrite in the
      cgo callback code so that msan knows the results were written.
      
      Change-Id: I80438dbd4561502bdee97fad3f02893a06880ee1
      Reviewed-on: https://go-review.googlesource.com/16611
      Reviewed-by: David Crawshaw <crawshaw@golang.org>
    • runtime: clean up park messages · f84420c2
      Austin Clements authored
      This changes "mark worker (idle)" to "GC worker (idle)" so it's more
      clear to users that these goroutines are GC-related. It changes "GC
      assist" to "GC assist wait" to make it clear that the assist is
      blocked.
      
      Change-Id: Iafbc0903c84f9250ff6bee14baac6fcd4ed5ef76
      Reviewed-on: https://go-review.googlesource.com/16511
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: free stack spans outside STW · 56ad88b1
      Austin Clements authored
      We couldn't do this before this point because it must be done before
      the next GC cycle starts. Hence, if it delayed the start of the next
      cycle, that would widen the window between reaching the heap trigger
      of the next cycle and starting the next GC cycle, during which the
      mutator could over-allocate. With the decentralized GC, any mutators
      that reach the heap trigger will block on the GC starting, so it's
      safe to widen the time between starting the world and being able to
      start the next GC cycle.
      
      Fixes #11465.
      
      Change-Id: Ic7ea7e9eba5b66fc050299f843a9c9001ad814aa
      Reviewed-on: https://go-review.googlesource.com/16394
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
    • misc/cgo/test: disable Test10303 for gccgo · d841860f
      Ian Lance Taylor authored
      When using gccgo it's OK if a pointer passed to C remains on the stack.
      Gccgo does not have a clear distinction between C and Go stacks.
      
      Change-Id: I3af9dd6fe078214ab16d9d8dad2d206608d7891d
      Reviewed-on: https://go-review.googlesource.com/16774
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Minux Ma <minux@golang.org>
  2. 10 Nov, 2015 17 commits
  3. 09 Nov, 2015 6 commits
  4. 08 Nov, 2015 5 commits