- 12 Jan, 2015 1 commit
Shenghou Ma authored
Fixes #9432

Change-Id: I08c92481afa7c7fac890aa780efc1cb2fabad528
Reviewed-on: https://go-review.googlesource.com/2115
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Russ Cox <rsc@golang.org>
-
- 11 Jan, 2015 2 commits
Daniel Morsing authored
Renaming the function broke the race detector: it looked for the old
name, no longer found it, and so did not insert the necessary
instrumentation.

Change-Id: I11fed6e807cc35be5724d26af12ceff33ebf4f7b
Reviewed-on: https://go-review.googlesource.com/2661
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Dave Cheney authored
Change-Id: Ibe3ba6426cc6e683ff3712faf6119922d0f88b5a
Reviewed-on: https://go-review.googlesource.com/2680
Reviewed-by: Minux Ma <minux@golang.org>
-
- 10 Jan, 2015 1 commit
David du Colombier authored
Update #9554

Change-Id: I7de2a7d585d56b84ab975565042ed997e6124e08
Reviewed-on: https://go-review.googlesource.com/2613
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
- 09 Jan, 2015 8 commits
Shenghou Ma authored
Also fix one unaligned stack size for nacl that is caught by this
change.

Fixes #9539.

Change-Id: Ib696a573d3f1f9bac7724f3a719aab65a11e04d3
Reviewed-on: https://go-review.googlesource.com/2600
Reviewed-by: Keith Randall <khr@golang.org>
-
Josh Bleecher Snyder authored
CL 2520 omitted to set the type for an OCONVNOP node. Typechecking
obviously cannot do it for us. 5g inserts float64 <--> [u]int64
conversions at walk time. The missing type caused it to crash.

Change-Id: Idce381f219bfef2e3a3be38d3ba3c258b71310ae
Reviewed-on: https://go-review.googlesource.com/2640
Reviewed-by: Keith Randall <khr@golang.org>
-
Josh Bleecher Snyder authored
Recognize loops of the form

    for i := range a {
        a[i] = zero
    }

in which the evaluation of a is free from side effects. Replace these
loops with calls to memclr. This occurs in the stdlib in 18 places.

The motivating example is clearing a byte slice:

    benchmark               old ns/op    new ns/op    delta
    BenchmarkGoMemclr5      3.31         3.26         -1.51%
    BenchmarkGoMemclr16     13.7         3.28         -76.06%
    BenchmarkGoMemclr64     50.8         4.14         -91.85%
    BenchmarkGoMemclr256    157          6.02         -96.17%

Update #5373.

Change-Id: I99d3e6f5f268e8c6499b7e661df46403e5eb83e4
Reviewed-on: https://go-review.googlesource.com/2520
Reviewed-by: Keith Randall <khr@golang.org>
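For illustration, a complete function in the recognized shape; assuming the element is assigned its zero value and evaluating the slice expression has no side effects, a loop like this is the kind that gets compiled to a single memclr of the backing array:

    package demo

    // clearBytes zeroes a byte slice using the plain index-clearing loop
    // that the compiler can now rewrite into one memclr call.
    func clearBytes(b []byte) {
        for i := range b {
            b[i] = 0
        }
    }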
-
Ian Lance Taylor authored
Change-Id: Icecfe9223d8457de067391fffa9f0fcee4292be7
Reviewed-on: https://go-review.googlesource.com/2620
Reviewed-by: David Crawshaw <crawshaw@golang.org>
-
Peter Waller authored
If an inbound connection is closed, cancel the outbound http request.
This is particularly useful if the outbound request may consume
resources unnecessarily until it is cancelled.

Fixes #8406

Change-Id: I738c4489186ce342f7e21d0ea3f529722c5b443a
Signed-off-by: Peter Waller <p@pwaller.net>
Reviewed-on: https://go-review.googlesource.com/2320
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
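A rough, hypothetical sketch of how such cancellation can be wired up, not the code in this change: a handler watches the inbound connection through http.CloseNotifier and aborts the outbound request with Transport.CancelRequest when the client goes away. The handler name and backend URL are made up for the example.

    package proxy

    import (
        "io"
        "net/http"
    )

    func proxyHandler(w http.ResponseWriter, r *http.Request) {
        transport := &http.Transport{}
        outreq, err := http.NewRequest(r.Method, "http://backend.internal"+r.URL.Path, r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        if cn, ok := w.(http.CloseNotifier); ok {
            done := make(chan struct{})
            defer close(done)
            go func() {
                select {
                case <-cn.CloseNotify():
                    // Client went away: abort the outbound request so it
                    // stops consuming resources on the backend.
                    transport.CancelRequest(outreq)
                case <-done:
                }
            }()
        }

        resp, err := transport.RoundTrip(outreq)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        io.Copy(w, resp.Body)
    }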
-
Shenghou Ma authored
Fixes #9541.

Change-Id: I5d659ad50d7c3d1c92ed9feb86cda4c1a6e62054
Reviewed-on: https://go-review.googlesource.com/2584
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Martin Möhrmann authored
Reduce the buffer to the maximum size needed for conversion of 64-bit
integers. Reduce the number of integer divisions used.

    benchmark           old ns/op    new ns/op    delta
    BenchmarkItoa       144          119          -17.36%
    BenchmarkPrintln    783          752          -3.96%

Change-Id: I6d57a7feebf90f303be5952767107302eccf4631
Reviewed-on: https://go-review.googlesource.com/2215
Reviewed-by: Rob Pike <r@golang.org>
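A hedged sketch of the buffer-size reasoning, not the strconv implementation: a uint64 never needs more than 20 decimal digits, and an int64 magnitude needs at most 19 plus a sign, so a fixed 20-byte buffer filled from the end always suffices for base 10.

    package demo

    // formatDecimal writes u (and an optional leading '-', used only for
    // int64 magnitudes, which have at most 19 digits) into a fixed-size
    // buffer from the end and returns the resulting string.
    func formatDecimal(u uint64, neg bool) string {
        var buf [20]byte // 1<<64-1 has 20 digits; 19 digits plus '-' also fits
        i := len(buf)
        for u >= 10 {
            i--
            buf[i] = byte('0' + u%10)
            u /= 10
        }
        i--
        buf[i] = byte('0' + u)
        if neg {
            i--
            buf[i] = '-'
        }
        return string(buf[i:])
    }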
-
Keith Randall authored
Random is bad; it can block and prevent binaries from starting. Use
urandom instead. We'd rather have bad random bits than no random bits.

Change-Id: I360e1cb90ace5518a1b51708822a1dae27071ebd
Reviewed-on: https://go-review.googlesource.com/2582
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Minux Ma <minux@golang.org>
-
- 08 Jan, 2015 17 commits
Shenghou Ma authored
This is a replay of CL 189760043, which is in release-branch.go1.4 but
somehow not in the master branch.

Change-Id: I11eb40a24273e7be397e092ef040e54efb8ffe86
Reviewed-on: https://go-review.googlesource.com/2541
Reviewed-by: Andrew Gerrand <adg@golang.org>
-
Keith Randall authored
In 32-bit worlds, 8-byte objects are only aligned to 4-byte boundaries.

Change-Id: I91469a9a67b1ee31dd508a4e105c39c815ecde58
Reviewed-on: https://go-review.googlesource.com/2581
Reviewed-by: Keith Randall <khr@golang.org>
-
Robert Griesemer authored
Change-Id: I2b40cd544dda550ac6ac6da19ba3867ec30b2774
Reviewed-on: https://go-review.googlesource.com/2563
Reviewed-by: Robert Griesemer <gri@golang.org>
-
Keith Randall authored
For a non-zero-sized struct with a final zero-sized field, add a byte
to the size (before rounding to alignment). This change ensures that
taking the address of the zero-sized field will not incorrectly leak
the following object in memory.

reflect.funcLayout also needs this treatment.

Fixes #9401

Change-Id: I1dc503dc5af4ca22c8f8c048fb7b4541cc957e0f
Reviewed-on: https://go-review.googlesource.com/2452
Reviewed-by: Russ Cox <rsc@golang.org>
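An illustrative case of the problem being fixed (the type and field names are made up): without the extra padding byte, the address of a trailing zero-sized field is one past the end of the struct, which aliases whatever object is allocated next and can keep it alive.

    package demo

    type T struct {
        buf [16]byte
        pad struct{} // zero-sized final field
    }

    // addrOfPad takes the address of the zero-sized field. With the added
    // padding byte this address stays inside t's allocation instead of
    // pointing at the following object in memory.
    func addrOfPad(t *T) *struct{} {
        return &t.pad
    }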
-
Robert Griesemer authored
(analogous to Change-Id: Ia473e9ab9c63a955c252426684176bca566645ae)

Fixes #9243.

    benchmark             old ns/op    new ns/op    delta
    BenchmarkAddVV_1      5.76         5.60         -2.78%
    BenchmarkAddVV_2      7.17         6.98         -2.65%
    BenchmarkAddVV_3      8.69         8.57         -1.38%
    BenchmarkAddVV_4      10.5         10.5         +0.00%
    BenchmarkAddVV_5      13.3         11.6         -12.78%
    BenchmarkAddVV_1e1    20.4         19.3         -5.39%
    BenchmarkAddVV_1e2    166          140          -15.66%
    BenchmarkAddVV_1e3    1588         1278         -19.52%
    BenchmarkAddVV_1e4    16138        12657        -21.57%
    BenchmarkAddVV_1e5    167608       127836       -23.73%
    BenchmarkAddVW_1      4.87         4.76         -2.26%
    BenchmarkAddVW_2      6.10         6.07         -0.49%
    BenchmarkAddVW_3      7.75         7.65         -1.29%
    BenchmarkAddVW_4      9.30         9.39         +0.97%
    BenchmarkAddVW_5      10.8         10.9         +0.93%
    BenchmarkAddVW_1e1    18.8         18.8         +0.00%
    BenchmarkAddVW_1e2    143          134          -6.29%
    BenchmarkAddVW_1e3    1390         1266         -8.92%
    BenchmarkAddVW_1e4    13877        12545        -9.60%
    BenchmarkAddVW_1e5    155330       125432       -19.25%

    benchmark             old MB/s     new MB/s     speedup
    BenchmarkAddVV_1      5556.09      5715.12      1.03x
    BenchmarkAddVV_2      8926.55      9170.64      1.03x
    BenchmarkAddVV_3      11042.15     11201.77     1.01x
    BenchmarkAddVV_4      12168.21     12245.50     1.01x
    BenchmarkAddVV_5      12041.39     13805.73     1.15x
    BenchmarkAddVV_1e1    15659.65     16548.18     1.06x
    BenchmarkAddVV_1e2    19268.57     22728.64     1.18x
    BenchmarkAddVV_1e3    20141.45     25033.36     1.24x
    BenchmarkAddVV_1e4    19827.86     25281.92     1.28x
    BenchmarkAddVV_1e5    19092.06     25031.92     1.31x
    BenchmarkAddVW_1      822.12       840.92       1.02x
    BenchmarkAddVW_2      1310.89      1317.89      1.01x
    BenchmarkAddVW_3      1549.31      1568.26      1.01x
    BenchmarkAddVW_4      1720.45      1703.77      0.99x
    BenchmarkAddVW_5      1857.12      1828.66      0.98x
    BenchmarkAddVW_1e1    2126.39      2132.38      1.00x
    BenchmarkAddVW_1e2    2784.49      2969.21      1.07x
    BenchmarkAddVW_1e3    2876.89      3157.35      1.10x
    BenchmarkAddVW_1e4    2882.32      3188.51      1.11x
    BenchmarkAddVW_1e5    2575.16      3188.96      1.24x

(measured on OS X 10.9.5, 2.3 GHz Intel Core i7, 8GB 1333 MHz DDR3)

Change-Id: I46698729d5e0bc3e277aa0146a9d7a086c0c26f1
Reviewed-on: https://go-review.googlesource.com/2560
Reviewed-by: Keith Randall <khr@golang.org>
-
Martin Möhrmann authored
Add compile-time constants for bases 10 and 16 instead of computing the
cutoff value on every invocation of ParseUint by a division.

Reduce usage of slice operations.

amd64:
    benchmark             old ns/op    new ns/op    delta
    BenchmarkAtoi         44.6         36.0         -19.28%
    BenchmarkAtoiNeg      44.2         38.9         -11.99%
    BenchmarkAtoi64       72.5         56.7         -21.79%
    BenchmarkAtoi64Neg    66.1         58.6         -11.35%

386:
    benchmark             old ns/op    new ns/op    delta
    BenchmarkAtoi         86.6         73.0         -15.70%
    BenchmarkAtoiNeg      86.6         72.3         -16.51%
    BenchmarkAtoi64       126          108          -14.29%
    BenchmarkAtoi64Neg    126          108          -14.29%

Change-Id: I0a271132120d776c97bb4ed1099793c73e159893
Reviewed-on: https://go-review.googlesource.com/2460
Reviewed-by: Robert Griesemer <gri@golang.org>
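A sketch of the cutoff idea, not the actual strconv code: the per-digit overflow check becomes a comparison against a constant, so the division that computes the cutoff happens once at compile time for the common bases.

    package demo

    // Smallest value n for which n*base overflows a uint64; computed at
    // compile time instead of by a division inside ParseUint.
    const (
        cutoff10 = (1<<64-1)/10 + 1
        cutoff16 = (1<<64-1)/16 + 1
    )

    // appendDigit10 reports whether decimal digit d can be appended to n
    // without overflowing uint64, returning the new value if so.
    func appendDigit10(n, d uint64) (uint64, bool) {
        if n >= cutoff10 {
            return 0, false // n*10 would overflow
        }
        n = n*10 + d
        if n < d {
            return 0, false // the +d wrapped around
        }
        return n, true
    }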
-
Rick Hudson authored
Run the GC in its own background goroutine, making the caller runnable
if resources are available. This is critical in single-goroutine
applications. Allow goroutines that allocate a lot to help out the GC
and, in doing so, throttle their own allocation. Adjust the test so
that it only detects that a GC is run during init calls, and not
whether the GC is memory efficient. Memory efficiency work will happen
later in 1.5.

Change-Id: I4306f5e377bb47c69bda1aedba66164f12b20c2b
Reviewed-on: https://go-review.googlesource.com/2349
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
-
Robert Griesemer authored
Change-Id: Ie31f957f6b60b0a9405147c7a0af789df01a4b02
Reviewed-on: https://go-review.googlesource.com/2550
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Austin Clements authored
This improves the printing of GC times to be both more human-friendly
and to provide enough information for the construction of MMU curves
and other statistics. The new times look like:

    GC: #8 72413852ns @143036695895725 pause=622900 maxpause=427037 goroutines=11 gomaxprocs=4
    GC: sweep term: 190584ns max=190584 total=275001 procs=4
    GC: scan: 260397ns max=260397 total=902666 procs=1
    GC: install wb: 5279ns max=5279 total=18642 procs=4
    GC: mark: 71530555ns max=71530555 total=186694660 procs=1
    GC: mark term: 427037ns max=427037 total=1691184 procs=4

This prints gomaxprocs and the number of procs used in each phase for
the benefit of analyzing mutator utilization during concurrent phases.
This also means the analysis doesn't have to hard-code which phases
are STW.

This prints the absolute start time only for the GC cycle. The other
start times can be derived from the phase durations. This declutters
the view for human readers and doesn't pose any additional complexity
for machine readers.

This removes the confusing "cycle" terminology. Instead, this places
the phase duration after the phase name and adds a "ns" unit, which
both makes it implicitly clear that this is the duration of that phase
and indicates the units of the times.

This adds a "GC:" prefix to all lines for easier identification.

Finally, this generally cleans up the code as well as the placement of
spaces in the output and adds print locking so the statistics blocks
are never interrupted by other prints.

Change-Id: Ifd056db83ed1b888de7dfa9a8fc5732b01ccc631
Reviewed-on: https://go-review.googlesource.com/2542
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Robert Griesemer authored
(platforms w/o corresponding assembly kernels)

For short vector adds there's some erratic slow-down, but overall these
routines have become significantly faster. This only matters for
platforms w/o native (assembly) versions of these kernels, so we are
not concerned about the minor slow-down for short vectors.

This code was already reviewed under Mercurial (golang.org/cl/172810043)
but wasn't submitted before the switch to git.

Benchmarks run on 2.3GHz Intel Core i7, running OS X 10.9.5, with the
respective AddVV and AddVW assembly routines disabled.

    benchmark             old ns/op    new ns/op    delta
    BenchmarkAddVV_1      6.59         7.09         +7.59%
    BenchmarkAddVV_2      10.3         10.1         -1.94%
    BenchmarkAddVV_3      10.9         12.6         +15.60%
    BenchmarkAddVV_4      13.9         15.6         +12.23%
    BenchmarkAddVV_5      16.8         17.3         +2.98%
    BenchmarkAddVV_1e1    29.5         29.9         +1.36%
    BenchmarkAddVV_1e2    246          232          -5.69%
    BenchmarkAddVV_1e3    2374         2185         -7.96%
    BenchmarkAddVV_1e4    58942        22292        -62.18%
    BenchmarkAddVV_1e5    668622       225279       -66.31%
    BenchmarkAddVW_1      6.81         5.58         -18.06%
    BenchmarkAddVW_2      7.69         6.86         -10.79%
    BenchmarkAddVW_3      9.56         8.32         -12.97%
    BenchmarkAddVW_4      12.1         9.53         -21.24%
    BenchmarkAddVW_5      13.2         10.9         -17.42%
    BenchmarkAddVW_1e1    23.4         18.0         -23.08%
    BenchmarkAddVW_1e2    175          141          -19.43%
    BenchmarkAddVW_1e3    1568         1266         -19.26%
    BenchmarkAddVW_1e4    15425        12596        -18.34%
    BenchmarkAddVW_1e5    156737       133539       -14.80%
    BenchmarkFibo         381678466    132958666    -65.16%

    benchmark             old MB/s     new MB/s     speedup
    BenchmarkAddVV_1      9715.25      9028.30      0.93x
    BenchmarkAddVV_2      12461.72     12622.60     1.01x
    BenchmarkAddVV_3      17549.64     15243.82     0.87x
    BenchmarkAddVV_4      18392.54     16398.29     0.89x
    BenchmarkAddVV_5      18995.23     18496.57     0.97x
    BenchmarkAddVV_1e1    21708.98     21438.28     0.99x
    BenchmarkAddVV_1e2    25956.53     27506.88     1.06x
    BenchmarkAddVV_1e3    26947.93     29286.66     1.09x
    BenchmarkAddVV_1e4    10857.96     28709.46     2.64x
    BenchmarkAddVV_1e5    9571.91      28409.21     2.97x
    BenchmarkAddVW_1      1175.28      1433.98      1.22x
    BenchmarkAddVW_2      2080.01      2332.54      1.12x
    BenchmarkAddVW_3      2509.28      2883.97      1.15x
    BenchmarkAddVW_4      2646.09      3356.83      1.27x
    BenchmarkAddVW_5      3020.69      3671.07      1.22x
    BenchmarkAddVW_1e1    3425.76      4441.40      1.30x
    BenchmarkAddVW_1e2    4553.17      5642.96      1.24x
    BenchmarkAddVW_1e3    5100.14      6318.72      1.24x
    BenchmarkAddVW_1e4    5186.15      6350.96      1.22x
    BenchmarkAddVW_1e5    5104.07      5990.74      1.17x

Change-Id: I7a62023b1105248a0e85e5b9819d3fd4266123d4
Reviewed-on: https://go-review.googlesource.com/2480
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Alan Donovan <adonovan@google.com>
-
Robert Griesemer authored
Replaced use of rotate instructions (RCRQ, RCLQ) with ADDQ/SBBQ for
restoring/saving the carry flag, per suggestion from Torbjörn Granlund
(author of the GMP bignum libs for C). The rotate instructions tend to
be slower on today's machines.

    benchmark             old ns/op    new ns/op    delta
    BenchmarkAddVV_1      5.69         5.51         -3.16%
    BenchmarkAddVV_2      7.15         6.87         -3.92%
    BenchmarkAddVV_3      8.69         8.06         -7.25%
    BenchmarkAddVV_4      8.10         8.13         +0.37%
    BenchmarkAddVV_5      8.37         8.47         +1.19%
    BenchmarkAddVV_1e1    13.1         12.0         -8.40%
    BenchmarkAddVV_1e2    78.1         69.4         -11.14%
    BenchmarkAddVV_1e3    815          656          -19.51%
    BenchmarkAddVV_1e4    8137         7345         -9.73%
    BenchmarkAddVV_1e5    100127       93909        -6.21%
    BenchmarkAddVW_1      4.86         4.71         -3.09%
    BenchmarkAddVW_2      5.67         5.50         -3.00%
    BenchmarkAddVW_3      6.51         6.34         -2.61%
    BenchmarkAddVW_4      6.69         6.66         -0.45%
    BenchmarkAddVW_5      7.20         7.21         +0.14%
    BenchmarkAddVW_1e1    10.0         9.34         -6.60%
    BenchmarkAddVW_1e2    45.4         52.3         +15.20%
    BenchmarkAddVW_1e3    417          491          +17.75%
    BenchmarkAddVW_1e4    4760         4852         +1.93%
    BenchmarkAddVW_1e5    69107        67717        -2.01%

    benchmark             old MB/s     new MB/s     speedup
    BenchmarkAddVV_1      11241.82     11610.28     1.03x
    BenchmarkAddVV_2      17902.68     18631.82     1.04x
    BenchmarkAddVV_3      22082.43     23835.64     1.08x
    BenchmarkAddVV_4      31588.18     31492.06     1.00x
    BenchmarkAddVV_5      38229.90     37783.17     0.99x
    BenchmarkAddVV_1e1    48891.67     53340.91     1.09x
    BenchmarkAddVV_1e2    81940.61     92191.86     1.13x
    BenchmarkAddVV_1e3    78443.09     97480.44     1.24x
    BenchmarkAddVV_1e4    78644.18     87129.50     1.11x
    BenchmarkAddVV_1e5    63918.48     68150.84     1.07x
    BenchmarkAddVW_1      13165.09     13581.00     1.03x
    BenchmarkAddVW_2      22588.04     23275.41     1.03x
    BenchmarkAddVW_3      29483.82     30303.96     1.03x
    BenchmarkAddVW_4      38286.54     38453.21     1.00x
    BenchmarkAddVW_5      44414.57     44370.59     1.00x
    BenchmarkAddVW_1e1    63816.84     68494.08     1.07x
    BenchmarkAddVW_1e2    140885.41    122427.16    0.87x
    BenchmarkAddVW_1e3    153258.31    130325.28    0.85x
    BenchmarkAddVW_1e4    134447.63    131904.02    0.98x
    BenchmarkAddVW_1e5    92609.41     94509.88     1.02x

Change-Id: Ia473e9ab9c63a955c252426684176bca566645ae
Reviewed-on: https://go-review.googlesource.com/2503
Reviewed-by: Keith Randall <khr@golang.org>
-
Martin Möhrmann authored
Edge cases like base 2 and 36 conversions are now covered. Many tests
are mirrored from the itoa tests. Added more test cases for syntax
errors.

Change-Id: Iad8b2fb4854f898c2bfa18cdeb0cb4a758fcfc2e
Reviewed-on: https://go-review.googlesource.com/2463
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
-
Alex Brainman authored
I would like to create new syscalls in src/internal/syscall, and I
prefer not to add new shell scripts for that.

Replacement for CL 136000043.

Change-Id: I840116b5914a2324f516cdb8603c78973d28aeb4
Reviewed-on: https://go-review.googlesource.com/1940
Reviewed-by: Russ Cox <rsc@golang.org>
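With the generator written in Go, wrappers can be regenerated through go generate rather than a shell script. For illustration only; the package name, file names, and flags below are assumptions, not the exact directive added by this change:

    package windows

    //go:generate go run mksyscall_windows.go -output zsyscall_windows.go syscall_windows.go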
-
Keith Randall authored
This test was taking a long time; reduce its zealousness.

Change-Id: Ib824247b84b0039a9ec690f72336bef3738d4c44
Reviewed-on: https://go-review.googlesource.com/2502
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
-
Brad Fitzpatrick authored
$GOTESTONLY controls which set of tests gets run. Only "std" is
supported. This should bring the time of the plan9 builder down from
90 minutes to maybe 10-15 minutes when running on GCE. (Plan 9 has
performance problems when running on GCE, and/or with the os/exec
package.)

This is a temporary workaround for one builder. The other Plan 9
builders will continue to do full builds. The plan9 builder will be
renamed plan9-386-gcepartial or something to indicate it's not running
the 'test/*' directory or API tests. Go on Plan 9 has bigger problems
for now. This lets us get trybots going sooner, including Plan 9,
without waiting 90+ minutes.

Update #9491

Change-Id: Ic505e9169c6b304ed4029b7bdfb77bb5c8fa8daa
Reviewed-on: https://go-review.googlesource.com/2522
Reviewed-by: Rob Pike <r@golang.org>
-
Brad Fitzpatrick authored
This isn't the final answer, but it will give us a clue about what's
going on.

Update #9491

Change-Id: I997f6004eb97e86a4a89a8caabaf58cfdf92a8f0
Reviewed-on: https://go-review.googlesource.com/2510
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Matthew Dempsky authored
Removing #cgo directive parsing from cmd/cgo was done in
https://golang.org/cl/8610044.

Change-Id: Id1bec58c6ec1f932df0ce0ee84ff253655bb73ff
Reviewed-on: https://go-review.googlesource.com/2501
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
- 07 Jan, 2015 11 commits
Shenghou Ma authored
SWIG has always returned a typed interface value for a C++ class, so
the interface value will never be nil even if the pointer itself is
NULL. ptr == NULL in C/C++ should be ptr.Swigcptr() == 0 in Go.

Fixes #9514.

Change-Id: I3778b91acf54d2ff22d7427fbf2b6ec9b9ce3b43
Reviewed-on: https://go-review.googlesource.com/2440
Reviewed-by: Ian Lance Taylor <iant@golang.org>
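A minimal illustration (the interface name here is hypothetical, though SWIG-generated Go wrappers expose the underlying pointer via a Swigcptr method as the message notes): the NULL test must compare the wrapped C pointer, not the interface value.

    package demo

    // Wrapped stands in for the interface type SWIG generates for a C++ class.
    type Wrapped interface {
        Swigcptr() uintptr
    }

    // isNull reports whether the underlying C++ pointer is NULL. Comparing
    // obj against nil would never be true here, because obj is a non-nil
    // typed interface value even when the C++ pointer it wraps is NULL.
    func isNull(obj Wrapped) bool {
        return obj.Swigcptr() == 0
    }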
-
David du Colombier authored
Increasing the timeout prevents the runtime test from timing out on the
Plan 9 instances running on GCE.

Update golang/go#9491

Change-Id: Id9c2b0c4e59b103608565168655799b353afcd77
Reviewed-on: https://go-review.googlesource.com/2462
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Matthew Dempsky authored
Now that there's no 6c compiler anymore, there's no need for cgo to
generate C headers that are compatible with it.

Fixes #9528

Change-Id: I43f53869719eb9a6065f1b39f66f060e604cbee0
Reviewed-on: https://go-review.googlesource.com/2482
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
Change-Id: I4dc97ff8111bdc5ca6e4e3af06aaf4f768031c68
Reviewed-on: https://go-review.googlesource.com/2473
Reviewed-by: Minux Ma <minux@golang.org>
-
Josh Bleecher Snyder authored
The compiler converts

    val, ok = m[key]

to

    tmp, ok = <runtime call>
    val = *tmp

For lookups of the form '_, ok = m[key]', the second statement is
unnecessary. By not generating it we save a nil check.

Change-Id: I21346cc195cb3c62e041af8b18770c0940358695
Reviewed-on: https://go-review.googlesource.com/1975
Reviewed-by: Russ Cox <rsc@golang.org>
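For example, a presence-only lookup like the one below now compiles to just the runtime map-access call; the val = *tmp load is no longer emitted:

    package demo

    func seen(m map[string]int, key string) bool {
        _, ok := m[key] // value discarded, so no *tmp dereference is generated
        return ok
    }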
-
Josh Bleecher Snyder authored
* Enable basic constant propagation for floats. The constant
  propagation is still not as aggressive as it could be.
* Implement MOVSS $(0), Xx and MOVSD $(0), Xx as XORPS Xx, Xx.

Sample code:

    func f32() float32 {
        var f float32
        return f
    }

    func f64() float64 {
        var f float64
        return f
    }

Before:

    "".f32 t=1 size=32 value=0 args=0x8 locals=0x0
    0x0000 00000 (demo.go:3)  TEXT     "".f32+0(SB),4,$0-8
    0x0000 00000 (demo.go:3)  FUNCDATA $0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
    0x0000 00000 (demo.go:3)  FUNCDATA $1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
    0x0000 00000 (demo.go:3)  MOVSS    $f32.00000000+0(SB),X0
    0x0008 00008 (demo.go:4)  MOVSS    $f32.00000000+0(SB),X0
    0x0010 00016 (demo.go:5)  MOVSS    X0,"".~r0+8(FP)
    0x0016 00022 (demo.go:5)  RET      ,

    "".f64 t=1 size=32 value=0 args=0x8 locals=0x0
    0x0000 00000 (demo.go:8)  TEXT     "".f64+0(SB),4,$0-8
    0x0000 00000 (demo.go:8)  FUNCDATA $0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
    0x0000 00000 (demo.go:8)  FUNCDATA $1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
    0x0000 00000 (demo.go:8)  MOVSD    $f64.0000000000000000+0(SB),X0
    0x0008 00008 (demo.go:9)  MOVSD    $f64.0000000000000000+0(SB),X0
    0x0010 00016 (demo.go:10) MOVSD    X0,"".~r0+8(FP)
    0x0016 00022 (demo.go:10) RET      ,

After:

    "".f32 t=1 size=16 value=0 args=0x8 locals=0x0
    0x0000 00000 (demo.go:3)  TEXT     "".f32+0(SB),4,$0-8
    0x0000 00000 (demo.go:3)  FUNCDATA $0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
    0x0000 00000 (demo.go:3)  FUNCDATA $1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
    0x0000 00000 (demo.go:3)  XORPS    X0,X0
    0x0003 00003 (demo.go:5)  MOVSS    X0,"".~r0+8(FP)
    0x0009 00009 (demo.go:5)  RET      ,

    "".f64 t=1 size=16 value=0 args=0x8 locals=0x0
    0x0000 00000 (demo.go:8)  TEXT     "".f64+0(SB),4,$0-8
    0x0000 00000 (demo.go:8)  FUNCDATA $0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
    0x0000 00000 (demo.go:8)  FUNCDATA $1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
    0x0000 00000 (demo.go:8)  XORPS    X0,X0
    0x0003 00003 (demo.go:10) MOVSD    X0,"".~r0+8(FP)
    0x0009 00009 (demo.go:10) RET      ,

Change-Id: Ie9eb65e324af4f664153d0a7cd22bb16b0fba16d
Reviewed-on: https://go-review.googlesource.com/2053
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
The equal algorithm used to take the size

    equal(p, q *T, size uintptr) bool

With this change, it does not:

    equal(p, q *T) bool

Similarly for the hash algorithm.

The size is rarely used, as most equal functions know the size of the
thing they are comparing. For instance f32equal already knows its
inputs are 4 bytes in size.

For cases where the size is not known, we allocate a closure (one for
each size needed) that points to an assembly stub that reads the size
out of the closure and calls generic code that has a size argument.

Reduces the size of the go binary by 0.07%. Performance impact is not
measurable.

Change-Id: I6e00adf3dde7ad2974adbcff0ee91e86d2194fec
Reviewed-on: https://go-review.googlesource.com/2392
Reviewed-by: Russ Cox <rsc@golang.org>
-
Josh Bleecher Snyder authored
It is unused as of e7173dfd.

Change-Id: I3e4ea3fc66cf0a768ff28172a151b244952eefc9
Reviewed-on: https://go-review.googlesource.com/2093
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Keith Randall authored
Use a lookup table to find the function which contains a pc. It is
faster than the old binary search. findfunc is used primarily for
stack copying and garbage collection.

    benchmark             old ns/op    new ns/op    delta
    BenchmarkStackCopy    294746596    255400980    -13.35%

(findfunc is one of several tasks done by stack copy; the findfunc
time itself is about 2.5x faster.)

The lookup table is built at link time. The table grows the binary
size by about 0.5% of the text segment. We impose a lower limit of 16
bytes on any function, which should not have much of an impact. (The
real constraint required is <= 256 functions in every 4096 bytes, but
16 bytes/function is easier to implement.)

Change-Id: Ic315b7a2c83e1f7203cd2a50e5d21a822e18fdca
Reviewed-on: https://go-review.googlesource.com/2097
Reviewed-by: Russ Cox <rsc@golang.org>
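A hedged sketch of the bucket-table idea in ordinary Go, not the runtime's or linker's actual data structures: the text segment is split into fixed-size buckets, each bucket records the index of the function containing its first byte, and a lookup is one table index plus a short forward scan (bounded because every function is at least 16 bytes).

    package demo

    const bucketSize = 4096

    // funcTable maps a pc to the index of the function containing it.
    // starts must be sorted; starts[i] is the entry pc of function i.
    type funcTable struct {
        base    uintptr
        starts  []uintptr
        buckets []int // index of the function containing each bucket's first byte
    }

    func newFuncTable(base uintptr, starts []uintptr, end uintptr) *funcTable {
        t := &funcTable{base: base, starts: starts}
        f := 0
        for pc := base; pc < end; pc += bucketSize {
            for f+1 < len(starts) && starts[f+1] <= pc {
                f++
            }
            t.buckets = append(t.buckets, f)
        }
        return t
    }

    // findfunc returns the index of the function containing pc.
    func (t *funcTable) findfunc(pc uintptr) int {
        f := t.buckets[(pc-t.base)/bucketSize]
        for f+1 < len(t.starts) && t.starts[f+1] <= pc {
            f++ // short scan within the bucket, bounded by the minimum function size
        }
        return f
    }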
-
Austin Clements authored
This implements support for calls to and from C in the ppc64 C ABI, as
well as supporting functionality such as an entry point from the
dynamic linker.

Change-Id: I68da6df50d5638cb1a3d3fef773fb412d7bf631a
Reviewed-on: https://go-review.googlesource.com/2009
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
Cgo will need this for calls from C to Go and for handling signals
that may occur in C code.

Change-Id: I50cc4caf17cd142bff501e7180a1e27721463ada
Reviewed-on: https://go-review.googlesource.com/2008
Reviewed-by: Russ Cox <rsc@golang.org>
-