  1. 12 Sep, 2014 15 commits
  2. 11 Sep, 2014 11 commits
    • runtime: make gostringnocopy update maxstring · bcd36e88
      Keith Randall authored
      Fixes #8706
      
      LGTM=josharian
      R=josharian
      CC=golang-codereviews
      https://golang.org/cl/143880043
    • doc: link directly to https://golang.org/dl/ · 6e55f7a8
      Matthew Dempsky authored
      Fixes #8705.
      
      LGTM=adg
      R=golang-codereviews, bradfitz, adg
      CC=golang-codereviews
      https://golang.org/cl/142890044
    • runtime: get rid of copyable check - all G frames are copyable. · 00365b13
      Keith Randall authored
      Just go ahead and do it; if something is wrong, we'll throw.
      
      Also rip out the cc-generated arg ptr maps; they are useless now.
      
      LGTM=rsc
      R=rsc
      CC=golang-codereviews
      https://golang.org/cl/133690045
    • runtime: make Gosched nosplit · 91baf5c6
      Russ Cox authored
      Replacing gosched with Gosched broke some builds because
      some of the call sites run at points where the stack cannot be grown.
      
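      To make the mechanism concrete, a minimal sketch assuming the
      //go:nosplit compiler directive (the package and function names below
      are made up; this is not the runtime's code): a nosplit function gets
      no stack-growth check in its prologue, which is what makes it safe to
      call at points where the stack cannot be grown.

        package sketch

        // mustNotGrowStack can be called while the stack cannot be grown,
        // because the compiler omits the usual stack-split check. Its frame
        // must therefore stay small.
        //
        //go:nosplit
        func mustNotGrowStack(x int) int {
            return x + 1
        }
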
      TBR=khr
      CC=golang-codereviews
      https://golang.org/cl/142000043
    • runtime: move gosched to Go, to add stack frame information · 15a5c35c
      Russ Cox authored
      LGTM=khr
      R=khr
      CC=golang-codereviews
      https://golang.org/cl/134520044
    • go/printer, gofmt: don't align map entries for irregular inputs · 724fa12f
      Robert Griesemer authored
      Details: Until now, when we saw a key:value pair that fit onto
      a single line, we assumed that it should be formatted with a
      vtab after the ':' for alignment of its value. This leads to
      odd behavior if there is more than one such pair on a line.
      This CL changes the behavior such that alignment is only used
      for the first pair on a line. This preserves existing behavior
      (in the std lib we have composite literals where the last line
      contains multiple entries and the first entry's value is aligned
      with the values on previous lines), and resolves this issue.
      
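      To make the new behavior concrete, a hypothetical literal (gofmt's real
      output uses tabs for alignment; spaces are shown here): on the last
      line only the first pair takes part in the column alignment, and the
      remaining pairs are left alone.

        package p

        var weekdays = map[string]int{
            "Monday":    1,
            "Tuesday":   2,
            "Wednesday": 3,
            "Thursday":  4, "Friday": 5, "Saturday": 6, "Sunday": 7,
        }
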
      No impact on formatting of std lib, go.tools, go.exp, go.net.
      
      Fixes #8685.
      
      LGTM=adonovan
      R=adonovan
      CC=golang-codereviews
      https://golang.org/cl/139430043
    • test: return errors earlier in run.go · 8cc6cb2f
      Josh Bleecher Snyder authored
      Fixes #8184.
      
      LGTM=bradfitz
      R=bradfitz
      CC=golang-codereviews
      https://golang.org/cl/137510043
    • cmd/gc: emit write barriers · fcb4cabb
      Russ Cox authored
      A write *p = x that needs a write barrier (not all do)
      now turns into runtime.writebarrierptr(p, x)
      or one of the other variants.
      
      The write barrier implementations are trivial.
      The goal here is to emit the calls in the correct places
      and to incur the cost of those function calls in the Go 1.4 cycle.
      
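      A conceptual sketch of the rewrite (the actual compiler output and the
      runtime signature may differ; this is not the runtime's code): the
      barrier body is trivial for now and just performs the store, so that
      later garbage-collector work has a place to hook in.

        package sketch

        import "unsafe"

        // The compiler turns a pointer store `*p = x` that needs a barrier
        // into a call like writebarrierptr(p, x).
        func writebarrierptr(dst *unsafe.Pointer, src unsafe.Pointer) {
            *dst = src // trivial for now: just do the store
        }
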
      Performance on the Go 1 benchmark suite below.
      Remember, the goal is to slow things down (and be correct).
      
      We will look into optimizations in separate CLs, as part of
      the process of comparing Go 1.3 against tip in order to make
      sure Go 1.4 runs at least as fast as Go 1.3.
      
      benchmark                          old ns/op      new ns/op      delta
      BenchmarkBinaryTree17              3118336716     3452876110     +10.73%
      BenchmarkFannkuch11                3184497677     3211552284     +0.85%
      BenchmarkFmtFprintfEmpty           89.9           107            +19.02%
      BenchmarkFmtFprintfString          236            287            +21.61%
      BenchmarkFmtFprintfInt             246            278            +13.01%
      BenchmarkFmtFprintfIntInt          395            458            +15.95%
      BenchmarkFmtFprintfPrefixedInt     343            378            +10.20%
      BenchmarkFmtFprintfFloat           477            525            +10.06%
      BenchmarkFmtManyArgs               1446           1707           +18.05%
      BenchmarkGobDecode                 14398047       14685958       +2.00%
      BenchmarkGobEncode                 12557718       12947104       +3.10%
      BenchmarkGzip                      453462345      472413285      +4.18%
      BenchmarkGunzip                    114226016      115127398      +0.79%
      BenchmarkHTTPClientServer          114689         112122         -2.24%
      BenchmarkJSONEncode                24914536       26135942       +4.90%
      BenchmarkJSONDecode                86832877       103620289      +19.33%
      BenchmarkMandelbrot200             4833452        4898780        +1.35%
      BenchmarkGoParse                   4317976        4835474        +11.98%
      BenchmarkRegexpMatchEasy0_32       150            166            +10.67%
      BenchmarkRegexpMatchEasy0_1K       393            402            +2.29%
      BenchmarkRegexpMatchEasy1_32       125            142            +13.60%
      BenchmarkRegexpMatchEasy1_1K       1010           1236           +22.38%
      BenchmarkRegexpMatchMedium_32      232            301            +29.74%
      BenchmarkRegexpMatchMedium_1K      76963          102721         +33.47%
      BenchmarkRegexpMatchHard_32        3833           5463           +42.53%
      BenchmarkRegexpMatchHard_1K        119668         161614         +35.05%
      BenchmarkRevcomp                   763449047      706768534      -7.42%
      BenchmarkTemplate                  124954724      134834549      +7.91%
      BenchmarkTimeParse                 517            511            -1.16%
      BenchmarkTimeFormat                501            514            +2.59%
      
      benchmark                         old MB/s     new MB/s     speedup
      BenchmarkGobDecode                53.31        52.26        0.98x
      BenchmarkGobEncode                61.12        59.28        0.97x
      BenchmarkGzip                     42.79        41.08        0.96x
      BenchmarkGunzip                   169.88       168.55       0.99x
      BenchmarkJSONEncode               77.89        74.25        0.95x
      BenchmarkJSONDecode               22.35        18.73        0.84x
      BenchmarkGoParse                  13.41        11.98        0.89x
      BenchmarkRegexpMatchEasy0_32      213.30       191.72       0.90x
      BenchmarkRegexpMatchEasy0_1K      2603.92      2542.74      0.98x
      BenchmarkRegexpMatchEasy1_32      254.00       224.93       0.89x
      BenchmarkRegexpMatchEasy1_1K      1013.53      827.98       0.82x
      BenchmarkRegexpMatchMedium_32     4.30         3.31         0.77x
      BenchmarkRegexpMatchMedium_1K     13.30        9.97         0.75x
      BenchmarkRegexpMatchHard_32       8.35         5.86         0.70x
      BenchmarkRegexpMatchHard_1K       8.56         6.34         0.74x
      BenchmarkRevcomp                  332.92       359.62       1.08x
      BenchmarkTemplate                 15.53        14.39        0.93x
      
      LGTM=rlh
      R=rlh
      CC=dvyukov, golang-codereviews, iant, khr, r
      https://golang.org/cl/136380043
    • runtime: allow crash from gsignal stack · 1d550b87
      Russ Cox authored
      The uses of onM in dopanic/startpanic are okay even from the signal stack.
      
      Fixes #8666.
      
      LGTM=khr
      R=khr
      CC=golang-codereviews
      https://golang.org/cl/134710043
    • net: fix inconsistent behavior across platforms in SetKeepAlivePeriod · f9567401
      Mikio Hara authored
      The previous implementation used the per-socket TCP keepalive options
      incorrectly. For example, it used a socket option at the wrong level to
      control TCP keepalive, and it didn't use the TCP_KEEPINTVL option where
      it was available.
      
      Fixes #8683.
      Fixes #8701.
      Update #8679
      
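      For reference, a small usage sketch of the API this change fixes (the
      address and the 3-minute period are only example values):

        package main

        import (
            "log"
            "net"
            "time"
        )

        func main() {
            c, err := net.Dial("tcp", "example.com:80")
            if err != nil {
                log.Fatal(err)
            }
            defer c.Close()

            tc := c.(*net.TCPConn)
            if err := tc.SetKeepAlive(true); err != nil {
                log.Fatal(err)
            }
            if err := tc.SetKeepAlivePeriod(3 * time.Minute); err != nil {
                log.Fatal(err)
            }
        }
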
      LGTM=iant
      R=golang-codereviews, iant
      CC=golang-codereviews
      https://golang.org/cl/136480043
    • runtime: add timing test for iterate/delete map idiom. · 689dc60c
      Keith Randall authored
      LGTM=bradfitz, iant
      R=iant, bradfitz
      CC=golang-codereviews
      https://golang.org/cl/140510043
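      The idiom being timed, for reference (a minimal sketch; the language
      permits deleting entries from a map while ranging over it):

        package sketch

        func clearMap(m map[string]int) {
            for k := range m {
                delete(m, k) // entries may be deleted during the range loop
            }
        }
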
  3. 10 Sep, 2014 4 commits
    • reflect: use runtime's memmove instead of its own · b78d7b75
      Keith Randall authored
      They will both need write barriers at some point.
      But until then, there is no reason not to share.
      
      LGTM=rsc
      R=golang-codereviews, rsc
      CC=golang-codereviews
      https://golang.org/cl/141330043
    • runtime: stop plan9/amd64 build from crashing · 2302b21b
      Anthony Martin authored
      LGTM=iant
      R=rsc, 0intro, alex.brainman, iant
      CC=golang-codereviews
      https://golang.org/cl/140460044
    • runtime: cleanup openbsd semasleep implementation · d955dfb0
      Matthew Dempsky authored
      The previous implementation had several subtle issues.  It's not
      clear if any of these could actually be causing the flakiness
      problems on openbsd/386, but fixing them should only help.
      
      1. thrsleep() is implemented internally as unlock, then test *abort
      (if abort != nil), then tsleep().  Under the current code, that makes
      it theoretically possible that semasleep()/thrsleep() could release
      waitsemalock, then a racing semawakeup() could acquire the lock,
      increment waitsemacount, and call thrwakeup()/wakeup() before
      thrsleep() reaches tsleep().  (In practice, OpenBSD's big kernel lock
      seems unlikely to let this actually happen.)
      
      The proper way to avoid this is to pass &waitsemacount as the abort
      pointer to thrsleep so thrsleep knows to re-check it before going to
      sleep, and to wakeup if it's non-zero.  Then we avoid any races.
      (I actually suspect openbsd's sema{sleep,wakeup}() could be further
      simplified using cas/xadd instead of locks, but I don't want to be
      more intrusive than necessary so late in the 1.4 release cycle.)
      
      2. semasleep() takes a relative sleep duration, but thrsleep() needs
      an absolute sleep deadline.  Instead of recomputing the deadline on each
      iteration, compute it once up front and use (*Timespec)(nil) to signify
      no deadline.  This ensures we retry properly if there's a spurious
      wakeup (see the sketch after this list).
      
      3. Instead of assuming if thrsleep() woke up and waitsemacount wasn't
      available that we must have hit the deadline, check that the system
      call returned EWOULDBLOCK.
      
      4. Instead of assuming that 64-bit systems are little-endian, compute
      timediv() using a temporary int32 nsec and then assign it to tv_nsec.
      
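      A sketch of points 2 and 4 above (the names and types are illustrative,
      not the runtime's): compute the absolute deadline once before the retry
      loop, and split the nanosecond count through a temporary int32 so
      nothing depends on the byte order of a wider field.

        package sketch

        type timespec struct {
            sec  int64
            nsec int32
        }

        // deadlineFor converts a relative sleep of relNS nanoseconds from
        // now into the absolute deadline a thrsleep-style call expects.
        func deadlineFor(now, relNS int64) timespec {
            abs := now + relNS       // computed once, reused on every retry
            nsec := int32(abs % 1e9) // temporary int32, then assign
            return timespec{sec: abs / 1e9, nsec: nsec}
        }
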
      LGTM=iant
      R=jsing, iant
      CC=golang-codereviews
      https://golang.org/cl/137960043
    • runtime: call rfork on scheduler stack on Plan 9 · 9f012e10
      Anthony Martin authored
      A race exists between the parent and child processes after a fork.
      The child needs to access the new M pointer passed as an argument
      but the parent may have already returned and clobbered it.
      
      Previously, we avoided this by saving the necessary data into
      registers before the rfork system call but this isn't guaranteed
      to work because Plan 9 makes no promises about the register state
      after a system call. Only the 386 kernel seems to save them.
      For amd64 and arm, this method won't work.
      
      We eliminate the race by allocating stack space for the scheduler
      goroutines (g0) in the per-process copy-on-write stack segment and
      by only calling rfork on the scheduler stack.
      
      LGTM=aram, 0intro, rsc
      R=aram, 0intro, mischief, rsc
      CC=golang-codereviews
      https://golang.org/cl/110680044
  4. 09 Sep, 2014 10 commits
    • runtime: more cleanups · 1a5e394a
      Keith Randall authored
      Move timenow thunk into time.s
      Move declarations for generic c/asm services into stubs.go
      
      LGTM=bradfitz
      R=golang-codereviews, bradfitz
      CC=golang-codereviews
      https://golang.org/cl/137360043
    • runtime: map iterators: always use intrabucket randomness · 251daf86
      Keith Randall authored
      Fixes #8688
      
      LGTM=rsc
      R=golang-codereviews, bradfitz, rsc, khr
      CC=golang-codereviews
      https://golang.org/cl/135660043
    • runtime: fix plan9/amd64 build? · f9829e92
      Russ Cox authored
      The only thing I can see that is really Plan 9-specific
      is that the stack pointer used for signal handling used
      to have more mapped memory above it.
      Specifically it used to have at most 88 bytes (StackTop),
      so change the allocation of a 40-byte frame to a 128-byte frame.
      
      No idea if this will work, but worth a try.
      
      Note that "fix" here means get it back to timing out
      instead of crashing.
      
      TBR=iant
      CC=golang-codereviews
      https://golang.org/cl/142840043
    • runtime: fix windows/386 build · ee6c6d96
      Russ Cox authored
      The difference between the old and the new (from earlier) code
      is that we set stackguard = stack.lo + StackGuard, while the old
      code set stackguard = stack.lo. That 512 bytes appears to be
      the difference between the profileloop function running and not running.
      
      We don't know how big the system stack is, but it is likely MUCH bigger than 4k.
      Give Go/C 8k.
      
      TBR=iant
      CC=golang-codereviews
      https://golang.org/cl/140440044
    • runtime: avoid read overrun in heapdump · 16c59acb
      Russ Cox authored
      Start the stack a few words below the actual top, so that
      if something tries to read goexit's caller PC from the stack,
      it won't fault on a bad memory address.
      Today, heapdump does that.
      Maybe tomorrow, traceback or something else will do that.
      Make it not a bug.
      
      TBR=khr
      R=khr
      CC=golang-codereviews
      https://golang.org/cl/136450043
    • testing: read coverage counters atomically · d33ee0c5
      Rob Pike authored
      For -mode=atomic, we need to read the counters
      using an atomic load to avoid a race. It's not worth worrying
      about whether -mode=atomic is set during generation
      of the profile, so we use atomic loads always.
      
      Fixes #8630.
      
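      A sketch of the idea (illustrative code, not the testing package's
      internals): counters bumped by atomically instrumented code are read
      back with sync/atomic loads so the reads never race with the updates.

        package sketch

        import "sync/atomic"

        func snapshot(counters []uint32) []uint32 {
            out := make([]uint32, len(counters))
            for i := range counters {
                out[i] = atomic.LoadUint32(&counters[i]) // atomic read, always
            }
            return out
        }
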
      LGTM=rsc
      R=dvyukov, rsc
      CC=golang-codereviews
      https://golang.org/cl/141800043
    • fmt: fix allocation test · eafa4fff
      Rob Pike authored
      With the new interface allocation rules, the old counts were wrong, and
      so was the commentary.
      
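      For context, a sketch of counting allocations per call with
      testing.AllocsPerRun (this is not fmt's actual test, and no expected
      count is claimed here):

        package sketch

        import (
            "fmt"
            "testing"
        )

        func allocsForSprintf() float64 {
            return testing.AllocsPerRun(100, func() {
                _ = fmt.Sprintf("%d", 42)
            })
        }
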
      LGTM=rsc
      R=rsc
      CC=golang-codereviews
      https://golang.org/cl/142760044
    • strconv: fix documentation for CanBackquote. · b6571a07
      Rob Pike authored
      Space is not a control character.
      
      Fixes #8571.
      
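      A quick illustration of the documented behavior:

        package main

        import (
            "fmt"
            "strconv"
        )

        func main() {
            fmt.Println(strconv.CanBackquote("hello world"))  // true: space is fine
            fmt.Println(strconv.CanBackquote("line1\nline2")) // false: control character
        }
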
      LGTM=iant
      R=golang-codereviews, iant
      CC=golang-codereviews
      https://golang.org/cl/137380043
    • runtime: fix build failures after CL 137410043 · 8ac35be1
      Russ Cox authored
      No promise about correctness, but they do build.
      
      TBR=khr
      CC=golang-codereviews
      https://golang.org/cl/143720043
    • runtime: assume precisestack, copystack, StackCopyAlways, ScanStackByFrames · 15b76ad9
      Russ Cox authored
      Commit to stack copying for stack growth.
      
      We're carrying around a surprising amount of cruft from older schemes.
      I am confident that precise stack scans and stack copying are here to stay.
      
      Delete fallback code for when precise stack info is disabled.
      Delete fallback code for when copying stacks is disabled.
      Delete fallback code for when StackCopyAlways is disabled.
      Delete Stktop chain - there is only one stack segment now.
      Delete M.moreargp, M.moreargsize, M.moreframesize, M.cret.
      Delete G.writenbuf (unrelated, just dead).
      Delete runtime.lessstack, runtime.oldstack.
      Delete many amd64 morestack variants.
      Delete initialization of morestack frame/arg sizes (shortens split prologue!).
      
      Replace G's stackguard/stackbase/stack0/stacksize/
      syscallstack/syscallguard/forkstackguard with simple stack
      bounds (lo, hi).
      
      Update liblink, runtime/cgo for adjustments to G.
      
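      A sketch of the simplified representation described above (field names
      are illustrative; see the runtime sources for the real definition): a
      goroutine's stack is now described only by its bounds, and the guard
      used by the split check is derived from lo.

        package sketch

        type stack struct {
            lo uintptr // lowest address of the stack memory
            hi uintptr // highest address; the stack grows down from here
        }
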
      LGTM=khr
      R=khr, bradfitz
      CC=golang-codereviews, iant, r
      https://golang.org/cl/137410043