1. 29 Jan, 2015 14 commits
    • test/closure2.go: correctly "use" tmp · 4a175682
      Robert Griesemer authored
      cmd/go doesn't complain (this is an open issue), but go/types does
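
      A minimal sketch of the idiom (the exact fix in test/closure2.go may
      differ; a blank assignment is the usual way to "use" a variable so
      that go/types stops flagging it):

      	package main

      	func main() {
      		tmp := 0
      		_ = tmp // blank assignment marks tmp as used for go/types
      	}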
      
      Change-Id: I2caec1f7aec991a9500d2c3504c29e4ab718c138
      Reviewed-on: https://go-review.googlesource.com/3541
      Reviewed-by: Alan Donovan <adonovan@google.com>
    • runtime: scanvalid race · 27aed3ce
      Rick Hudson authored
      Set gcscanvalid=false after the status has been CASed to _Grunning.
      If it is cleared before the CAS and atomicstatus races to a scan
      state, the scanner will set gcscanvalid=true and we will end up
      _Grunning with gcscanvalid==true, which is not a good thing.

      Fixes #9727
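
      A toy model of the ordering (simplified names, not the runtime's real
      code: status stands in for atomicstatus, scanValid for gcscanvalid):

      	package main

      	import "sync/atomic"

      	const (
      		grunnable uint32 = iota
      		gscan
      		grunning
      	)

      	type g struct {
      		status    uint32
      		scanValid bool
      	}

      	func execute(gp *g) {
      		// Become running first; once the CAS has succeeded, a
      		// concurrent scanner can no longer move us to a scan state.
      		for !atomic.CompareAndSwapUint32(&gp.status, grunnable, grunning) {
      		}
      		// Only now clear scanValid. Clearing it before the CAS would
      		// let a racing scanner set it back to true, leaving the
      		// goroutine running with a stale "scan is valid" claim.
      		gp.scanValid = false
      	}

      	func main() {
      		gp := &g{status: grunnable, scanValid: true}
      		execute(gp)
      		println(gp.status == grunning, gp.scanValid)
      	}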
      
      Change-Id: Ie53ea744a5600392b47da91159d985fe6fe75961
      Reviewed-on: https://go-review.googlesource.com/3510
      Reviewed-by: Austin Clements <austin@google.com>
    • runtime: use func value for parfor body · 428afae0
      Austin Clements authored
      Yet another leftover from C: parfor took a func value for the
      callback, cast it to an unsafe.Pointer for storage, and then cast
      it back to a func value to call it.  This is unnecessary, so just
      store the body as a func value.  Beyond general cleanup, this also
      eliminates the last use of unsafe in parfor.
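
      A before/after sketch (hypothetical, stripped-down types; the real
      one is the runtime's parfor struct):

      	package main

      	import "unsafe"

      	// Before: the callback was stored as an unsafe.Pointer and cast
      	// back to a func value at the call site.
      	type parforC struct {
      		body unsafe.Pointer // holds a *func(*parforC, uint32)
      	}

      	func (p *parforC) do(i uint32) {
      		(*(*func(*parforC, uint32))(p.body))(p, i)
      	}

      	// After: the body is stored as a plain func value; no unsafe.
      	type parfor struct {
      		body func(*parfor, uint32)
      	}

      	func (p *parfor) do(i uint32) { p.body(p, i) }

      	func main() {
      		p := &parfor{body: func(_ *parfor, i uint32) { println(i) }}
      		p.do(7)
      	}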
      
      Change-Id: Ia904af7c6c443ba75e2699835aee8e9a39b26dd8
      Reviewed-on: https://go-review.googlesource.com/3396
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    • runtime: eliminate parfor ctx field · ebbdf2a1
      Austin Clements authored
      Prior to the conversion of the runtime to Go, this void* was
      necessary to get closure information into C callbacks.  There
      are no more C callbacks and parfor is perfectly capable of
      invoking a Go closure now, so eliminate ctx and all of its
      unsafe-ness.  (Plus, the runtime currently doesn't use ctx for
      anything.)
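
      A sketch of why ctx is no longer needed (hypothetical code; the real
      signatures live in runtime/parfor.go):

      	package main

      	// In the C days the callback could not close over state, so
      	// callers threaded it through an opaque ctx pointer. A Go closure
      	// captures the state directly:
      	func main() {
      		results := make([]int, 8)
      		body := func(i uint32) {
      			results[i] = int(i) * int(i) // captured, no ctx needed
      		}
      		for i := uint32(0); i < 8; i++ {
      			body(i)
      		}
      		println(results[3]) // 9
      	}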
      
      Change-Id: I39fc53b7dd3d7f660710abc76b0d831bfc6296d8
      Reviewed-on: https://go-review.googlesource.com/3395
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    • runtime: use threads slice in parfor instead of unsafe pointer math · 8e2bb7bb
      Austin Clements authored
      parfor originally used a tail array for its thread array.  This got
      replaced with a slice allocation in the conversion to Go, but many of
      its gnarlier effects remained.  Instead of keeping track of the
      pointer to the first element of the slice and using unsafe pointer
      math to get at the ith element, just keep the slice around and use
      regular slice indexing.  There is no longer any need for padding to
      64-bit align the tail array (there hasn't been since the Go
      conversion), so remove this unnecessary padding from the parfor
      struct.  Finally, since the slice tracks its own length, replace the
      nthrmax field with len(thr).
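
      A sketch of the difference (simplified; parforthread here is a
      stand-in for the real per-thread struct):

      	package main

      	import "unsafe"

      	type parforthread struct{ pos uint64 }

      	func main() {
      		thr := make([]parforthread, 4)

      		// Before: track only &thr[0] and reach the ith element with
      		// unsafe pointer arithmetic.
      		first := &thr[0]
      		i := uintptr(2)
      		pi := (*parforthread)(unsafe.Pointer(
      			uintptr(unsafe.Pointer(first)) + i*unsafe.Sizeof(*first)))
      		pi.pos = 42

      		// After: keep the slice and index it; len(thr) also replaces
      		// the separate nthrmax field.
      		thr[3].pos = thr[2].pos + 1
      		println(len(thr), thr[2].pos, thr[3].pos)
      	}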
      
      Change-Id: I0020a1815849bca53e3613a8fa46ae4fbae67576
      Reviewed-on: https://go-review.googlesource.com/3394
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: move all parfor-related code to parfor.go · 6b7b0f9a
      Austin Clements authored
      This cleanup was slated for after the conversion of the runtime to Go.
      Also improve type and function documentation.
      
      Change-Id: I55a16b09e00cf701f246deb69e7ce7e3e04b26e7
      Reviewed-on: https://go-review.googlesource.com/3393
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    • runtime: check alignment of 8-byte atomic loads and stores on 386 · 7a71726b
      Austin Clements authored
      Currently, if we do an atomic{load,store}64 of an unaligned address on
      386, we'll simply get a non-atomic load/store.  This has been the
      source of myriad bugs, so add alignment checks to these two
      operations.  These checks parallel the equivalent checks in
      sync/atomic.
      
      The alignment check is not necessary in cas64 because it uses a locked
      instruction.  The CPU will either execute it atomically or raise an
      alignment fault (#AC), depending on the alignment check flag, and
      either outcome is fine.
      
      This also fixes the two places in the runtime that trip the new
      checks.  One is in the runtime self-test and shouldn't have caused
      real problems.  The other is in tickspersecond and could, in
      principle, have caused a misread of the ticks per second during
      initialization.
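
      A sketch of the guard being added (a user-level approximation: the
      runtime throws on misalignment, panic stands in here, and the locked
      8-byte access is approximated with sync/atomic):

      	package main

      	import (
      		"sync/atomic"
      		"unsafe"
      	)

      	func load64(addr *uint64) uint64 {
      		if uintptr(unsafe.Pointer(addr))%8 != 0 {
      			panic("unaligned 64-bit atomic load")
      		}
      		return atomic.LoadUint64(addr)
      	}

      	func main() {
      		var x uint64 = 1
      		println(load64(&x))
      	}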
      
      Change-Id: If1796667012a6154f64f5e71d043c7f5fb3dd050
      Reviewed-on: https://go-review.googlesource.com/3521
      Reviewed-by: Russ Cox <rsc@golang.org>
    • cmd/go: add build flag -toolexec · 83c10b20
      Russ Cox authored
      Like the -exec flag, which specifies a program to use to run a built executable,
      the -toolexec flag specifies a program to use to run a tool like 5a, 5g, or 5l.
      
      This flag enables running the toolchain under common testing environments,
      such as valgrind.
      
      This flag also enables the use of custom testing environments or the substitution
      of alternate tools. See https://godoc.org/rsc.io/toolstash for one possibility.
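
      For example, a hypothetical invocation that runs each toolchain
      command under valgrind while building a package:

      	go build -toolexec 'valgrind' somepkg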
      
      Change-Id: I256aa7af2d96a4bc7911dc58151cc2155dbd4121
      Reviewed-on: https://go-review.googlesource.com/3351
      Reviewed-by: Rob Pike <r@golang.org>
    • cmd/gc: capture variables by value · 0e80b2e0
      Dmitry Vyukov authored
      The language specification says that variables are captured by
      reference, and that is what the gc compiler does. However, in lots
      of cases it is possible to capture variables by value under the hood
      without affecting the visible behavior of programs. For example,
      consider the following typical pattern:
      
      	func (o *Obj) requestMany(urls []string) []Result {
      		wg := new(sync.WaitGroup)
      		wg.Add(len(urls))
      		res := make([]Result, len(urls))
      		for i := range urls {
      			i := i
      			go func() {
      				res[i] = o.requestOne(urls[i])
      				wg.Done()
      			}()
      		}
      		wg.Wait()
      		return res
      	}
      
      Currently o, wg, res, and i are captured by reference, causing
      3+len(urls) allocations (e.g. PPARAM o is promoted to PPARAMREF and
      moved to the heap). But all of them can be captured by value without
      changing behavior.
      
      This change implements a simple strategy for capturing by value:
      if a captured variable is not addrtaken and is never assigned to,
      then it is captured by value (it is effectively const).
      This simple strategy turned out to be very effective:
      ~80% of all captures in the std lib are turned into value captures.
      The remaining 20% are mostly in defers and non-escaping closures,
      that is, they do not cause allocations anyway.
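
      A sketch of the rule (illustrative only):

      	package main

      	func main() {
      		// s and i are not address-taken and never reassigned after
      		// capture, so they may be captured by value (effectively
      		// const).
      		s, i := "hello", 42
      		done := make(chan bool)
      		go func() { println(s, i); done <- true }()
      		<-done

      		// n is assigned inside the closure, so it must remain
      		// captured by reference: the caller has to see the update.
      		n := 0
      		func() { n++ }()
      		println(n) // 1
      	}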
      
      benchmark                                    old allocs     new allocs     delta
      BenchmarkCompressedZipGarbage                153            126            -17.65%
      BenchmarkEncodeDigitsSpeed1e4                91             69             -24.18%
      BenchmarkEncodeDigitsSpeed1e5                178            129            -27.53%
      BenchmarkEncodeDigitsSpeed1e6                1510           1051           -30.40%
      BenchmarkEncodeDigitsDefault1e4              100            75             -25.00%
      BenchmarkEncodeDigitsDefault1e5              193            139            -27.98%
      BenchmarkEncodeDigitsDefault1e6              1420           985            -30.63%
      BenchmarkEncodeDigitsCompress1e4             100            75             -25.00%
      BenchmarkEncodeDigitsCompress1e5             193            139            -27.98%
      BenchmarkEncodeDigitsCompress1e6             1420           985            -30.63%
      BenchmarkEncodeTwainSpeed1e4                 109            81             -25.69%
      BenchmarkEncodeTwainSpeed1e5                 211            151            -28.44%
      BenchmarkEncodeTwainSpeed1e6                 1588           1097           -30.92%
      BenchmarkEncodeTwainDefault1e4               103            77             -25.24%
      BenchmarkEncodeTwainDefault1e5               199            143            -28.14%
      BenchmarkEncodeTwainDefault1e6               1324           917            -30.74%
      BenchmarkEncodeTwainCompress1e4              103            77             -25.24%
      BenchmarkEncodeTwainCompress1e5              190            137            -27.89%
      BenchmarkEncodeTwainCompress1e6              1327           919            -30.75%
      BenchmarkConcurrentDBExec                    16223          16220          -0.02%
      BenchmarkConcurrentStmtQuery                 17687          16182          -8.51%
      BenchmarkConcurrentStmtExec                  5191           5186           -0.10%
      BenchmarkConcurrentTxQuery                   17665          17661          -0.02%
      BenchmarkConcurrentTxExec                    15154          15150          -0.03%
      BenchmarkConcurrentTxStmtQuery               17661          16157          -8.52%
      BenchmarkConcurrentTxStmtExec                3677           3673           -0.11%
      BenchmarkConcurrentRandom                    14000          13614          -2.76%
      BenchmarkManyConcurrentQueries               25             22             -12.00%
      BenchmarkDecodeComplex128Slice               318            252            -20.75%
      BenchmarkDecodeFloat64Slice                  318            252            -20.75%
      BenchmarkDecodeInt32Slice                    318            252            -20.75%
      BenchmarkDecodeStringSlice                   2318           2252           -2.85%
      BenchmarkDecode                              11             8              -27.27%
      BenchmarkEncodeGray                          64             56             -12.50%
      BenchmarkEncodeNRGBOpaque                    64             56             -12.50%
      BenchmarkEncodeNRGBA                         67             58             -13.43%
      BenchmarkEncodePaletted                      68             60             -11.76%
      BenchmarkEncodeRGBOpaque                     64             56             -12.50%
      BenchmarkGoLookupIP                          153            139            -9.15%
      BenchmarkGoLookupIPNoSuchHost                508            466            -8.27%
      BenchmarkGoLookupIPWithBrokenNameServer      245            226            -7.76%
      BenchmarkClientServer                        62             59             -4.84%
      BenchmarkClientServerParallel4               62             59             -4.84%
      BenchmarkClientServerParallel64              62             59             -4.84%
      BenchmarkClientServerParallelTLS4            79             76             -3.80%
      BenchmarkClientServerParallelTLS64           112            109            -2.68%
      BenchmarkCreateGoroutinesCapture             10             6              -40.00%
      BenchmarkAfterFunc                           1006           1005           -0.10%
      
      Fixes #6632.
      
      Change-Id: I0cd51e4d356331d7f3c5f447669080cd19b0d2ca
      Reviewed-on: https://go-review.googlesource.com/3166
      Reviewed-by: Russ Cox <rsc@golang.org>
    • net: remove full stack test cases for IPConn · e16ed287
      Mikio Hara authored
      A few packages that handle net.IPConn in the golang.org/x/net
      sub-repository already implement full stack test cases with more
      coverage than the net package. There is no need to keep duplicate
      code around here.

      This change removes full stack test cases for IPConn that require
      knowing how to speak with each protocol stack implementation on the
      supported platforms.
      
      Change-Id: I871119a9746fc6a2b997b69cfd733463558f5816
      Reviewed-on: https://go-review.googlesource.com/3404
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • net: remove solaris tag from cgo · 5e279ddd
      Mikio Hara authored
      For now the solaris port does not support cgo. Moreover, its system
      calls and library interfaces differ from those of the BSDs.
      
      Change-Id: Idb4fed889973368b35d38b361b23581abacfdeab
      Reviewed-on: https://go-review.googlesource.com/3306
      Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
    • encoding/json: add UnmarshalTypeError.Offset · a257ffb1
      Alex Plugaru authored
      Fixes #9693
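
      A minimal sketch of consuming the new field (Offset is the position
      in the input where the type error was detected):

      	package main

      	import (
      		"encoding/json"
      		"fmt"
      	)

      	func main() {
      		var v struct{ N int }
      		err := json.Unmarshal([]byte(`{"N": "nope"}`), &v)
      		if te, ok := err.(*json.UnmarshalTypeError); ok {
      			fmt.Println(te.Value, te.Type, te.Offset)
      		}
      	}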
      
      Change-Id: Ibf07199729bfc883b2a7e051cafd98185f912acd
      Reviewed-on: https://go-review.googlesource.com/3283
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    • expvar: Use sync/atomic to manipulate Int for better perf · bd043d86
      Evan Phoenix authored
      Using a mutex to protect a single int operation is quite heavyweight.
      Using sync/atomic provides much better performance. This change was
      benchmarked as follows:
      
      BenchmarkSync   10000000       139 ns/op
      BenchmarkAtomic 200000000      9.90 ns/op
      
      package blah
      
      import (
              "sync"
              "sync/atomic"
              "testing"
      )
      
      type Int struct {
              mu sync.RWMutex
              i  int64
      }
      
      func (v *Int) Add(delta int64) {
              v.mu.Lock()
              defer v.mu.Unlock()
              v.i += delta
      }
      
      type AtomicInt struct {
              i int64
      }
      
      func (v *AtomicInt) Add(delta int64) {
              atomic.AddInt64(&v.i, delta)
      }
      
      func BenchmarkSync(b *testing.B) {
              s := new(Int)
      
              for i := 0; i < b.N; i++ {
                      s.Add(1)
              }
      }
      
      func BenchmarkAtomic(b *testing.B) {
              s := new(AtomicInt)
      
              for i := 0; i < b.N; i++ {
                      s.Add(1)
              }
      }
      
      Change-Id: I6998239c785967647351bbfe8533c38e4894543b
      Reviewed-on: https://go-review.googlesource.com/3430
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    • cmd/cgo: add support for s390 and s390x · b49c3ac2
      Dominik Vogt authored
      This patch was previously sent for review using hg:
      golang.org/cl/173930043
      
      Change-Id: I559a2f2ee07990d0c23d2580381e32f8e23077a5
      Reviewed-on: https://go-review.googlesource.com/3033
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
  2. 28 Jan, 2015 25 commits
  3. 27 Jan, 2015 1 commit