- 08 Jan, 2015 11 commits
-
-
Rick Hudson authored
Run GC in its own background goroutine, making the caller runnable if resources are available. This is critical in single-goroutine applications. Allow goroutines that allocate a lot to help out the GC, and in doing so throttle their own allocation. Adjust the test so that it only detects that a GC is run during init calls, not whether the GC is memory efficient. Memory-efficiency work will happen later in 1.5.

Change-Id: I4306f5e377bb47c69bda1aedba66164f12b20c2b
Reviewed-on: https://go-review.googlesource.com/2349
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
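The "allocate a lot, help out" idea can be pictured with a small sketch. This is purely illustrative: assistRatio, gcRunning, and doSomeGCWork are invented stand-ins, not the runtime's actual internals.

    package main

    import "fmt"

    // Invented stand-ins; the runtime's real mechanism differs.
    var assistRatio = 0.5 // hypothetical: GC work owed per byte allocated

    func gcRunning() bool   { return false } // stub: is a GC cycle active?
    func doSomeGCWork() int { return 4096 }  // stub: mark a chunk, return bytes done

    // allocate sketches the assist idea: a goroutine that allocates while
    // GC is running first pays off a proportional "debt" of GC work,
    // throttling its own allocation rate.
    func allocate(size int) []byte {
        for debt := assistRatio * float64(size); gcRunning() && debt > 0; {
            debt -= float64(doSomeGCWork())
        }
        return make([]byte, size)
    }

    func main() {
        fmt.Println(len(allocate(1 << 20)))
    }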
-
Robert Griesemer authored
Change-Id: Ie31f957f6b60b0a9405147c7a0af789df01a4b02
Reviewed-on: https://go-review.googlesource.com/2550
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Austin Clements authored
This improves the printing of GC times to be both more human-friendly and to provide enough information for the construction of MMU curves and other statistics. The new times look like:

GC: #8 72413852ns @143036695895725 pause=622900 maxpause=427037 goroutines=11 gomaxprocs=4
GC:     sweep term: 190584ns     max=190584   total=275001    procs=4
GC:     scan:       260397ns     max=260397   total=902666    procs=1
GC:     install wb: 5279ns       max=5279     total=18642     procs=4
GC:     mark:       71530555ns   max=71530555 total=186694660 procs=1
GC:     mark term:  427037ns     max=427037   total=1691184   procs=4

This prints gomaxprocs and the number of procs used in each phase for the benefit of analyzing mutator utilization during concurrent phases. This also means the analysis doesn't have to hard-code which phases are STW.

This prints the absolute start time only for the GC cycle. The other start times can be derived from the phase durations. This declutters the view for human readers and doesn't pose any additional complexity for machine readers.

This removes the confusing "cycle" terminology. Instead, this places the phase duration after the phase name and adds a "ns" unit, which both makes it implicitly clear that this is the duration of that phase and indicates the units of the times.

This adds a "GC:" prefix to all lines for easier identification.

Finally, this generally cleans up the code as well as the placement of spaces in the output, and adds print locking so the statistics blocks are never interrupted by other prints.

Change-Id: Ifd056db83ed1b888de7dfa9a8fc5732b01ccc631
Reviewed-on: https://go-review.googlesource.com/2542
Reviewed-by: Rick Hudson <rlh@golang.org>
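Because the layout is fixed, the summary line is easy to consume mechanically. A minimal sketch, assuming only the field layout shown in the sample output above:

    package main

    import "fmt"

    // Parse the per-cycle summary line. Field names are assumptions
    // based on the sample output only.
    func main() {
        line := "GC: #8 72413852ns @143036695895725 pause=622900 maxpause=427037 goroutines=11 gomaxprocs=4"
        var n int
        var dur, start, pause, maxPause, goroutines, gomaxprocs int64
        _, err := fmt.Sscanf(line,
            "GC: #%d %dns @%d pause=%d maxpause=%d goroutines=%d gomaxprocs=%d",
            &n, &dur, &start, &pause, &maxPause, &goroutines, &gomaxprocs)
        if err != nil {
            panic(err)
        }
        fmt.Println(n, dur, pause, maxPause, goroutines, gomaxprocs)
    }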
-
Robert Griesemer authored
(platforms w/o corresponding assembly kernels)

For short vector adds there's some erratic slow-down, but overall these routines have become significantly faster. This only matters for platforms w/o native (assembly) versions of these kernels, so we are not concerned about the minor slow-down for short vectors. This code was already reviewed under Mercurial (golang.org/cl/172810043) but wasn't submitted before the switch to git.

Benchmarks run on 2.3GHz Intel Core i7, running OS X 10.9.5, with the respective AddVV and AddVW assembly routines disabled.

benchmark             old ns/op    new ns/op    delta
BenchmarkAddVV_1      6.59         7.09         +7.59%
BenchmarkAddVV_2      10.3         10.1         -1.94%
BenchmarkAddVV_3      10.9         12.6         +15.60%
BenchmarkAddVV_4      13.9         15.6         +12.23%
BenchmarkAddVV_5      16.8         17.3         +2.98%
BenchmarkAddVV_1e1    29.5         29.9         +1.36%
BenchmarkAddVV_1e2    246          232          -5.69%
BenchmarkAddVV_1e3    2374         2185         -7.96%
BenchmarkAddVV_1e4    58942        22292        -62.18%
BenchmarkAddVV_1e5    668622       225279       -66.31%
BenchmarkAddVW_1      6.81         5.58         -18.06%
BenchmarkAddVW_2      7.69         6.86         -10.79%
BenchmarkAddVW_3      9.56         8.32         -12.97%
BenchmarkAddVW_4      12.1         9.53         -21.24%
BenchmarkAddVW_5      13.2         10.9         -17.42%
BenchmarkAddVW_1e1    23.4         18.0         -23.08%
BenchmarkAddVW_1e2    175          141          -19.43%
BenchmarkAddVW_1e3    1568         1266         -19.26%
BenchmarkAddVW_1e4    15425        12596        -18.34%
BenchmarkAddVW_1e5    156737       133539       -14.80%
BenchmarkFibo         381678466    132958666    -65.16%

benchmark             old MB/s     new MB/s     speedup
BenchmarkAddVV_1      9715.25      9028.30      0.93x
BenchmarkAddVV_2      12461.72     12622.60     1.01x
BenchmarkAddVV_3      17549.64     15243.82     0.87x
BenchmarkAddVV_4      18392.54     16398.29     0.89x
BenchmarkAddVV_5      18995.23     18496.57     0.97x
BenchmarkAddVV_1e1    21708.98     21438.28     0.99x
BenchmarkAddVV_1e2    25956.53     27506.88     1.06x
BenchmarkAddVV_1e3    26947.93     29286.66     1.09x
BenchmarkAddVV_1e4    10857.96     28709.46     2.64x
BenchmarkAddVV_1e5    9571.91      28409.21     2.97x
BenchmarkAddVW_1      1175.28      1433.98      1.22x
BenchmarkAddVW_2      2080.01      2332.54      1.12x
BenchmarkAddVW_3      2509.28      2883.97      1.15x
BenchmarkAddVW_4      2646.09      3356.83      1.27x
BenchmarkAddVW_5      3020.69      3671.07      1.22x
BenchmarkAddVW_1e1    3425.76      4441.40      1.30x
BenchmarkAddVW_1e2    4553.17      5642.96      1.24x
BenchmarkAddVW_1e3    5100.14      6318.72      1.24x
BenchmarkAddVW_1e4    5186.15      6350.96      1.22x
BenchmarkAddVW_1e5    5104.07      5990.74      1.17x

Change-Id: I7a62023b1105248a0e85e5b9819d3fd4266123d4
Reviewed-on: https://go-review.googlesource.com/2480
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Alan Donovan <adonovan@google.com>
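For platforms without assembly kernels, the hot path is a carry-propagating word loop. A minimal sketch of that technique (illustrative only; the package's actual generic kernels differ in detail):

    package main

    import "fmt"

    // Word stands in for math/big's Word (an unsigned machine word).
    type Word uintptr

    // addVV sets z = x + y element-wise with carry and returns the final
    // carry. Unsigned overflow is detected by the wrap-around comparisons.
    func addVV(z, x, y []Word) (c Word) {
        for i := range z {
            s := x[i] + c
            if s < x[i] { // x[i] + c wrapped around
                c = 1
            } else {
                c = 0
            }
            s += y[i]
            if s < y[i] { // s + y[i] wrapped around
                c = 1
            }
            z[i] = s
        }
        return
    }

    func main() {
        x := []Word{^Word(0), 1} // low word is all ones
        y := []Word{1, 0}
        z := make([]Word, 2)
        fmt.Println(addVV(z, x, y), z) // 0 [0 2]: carry rippled into the high word
    }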
-
Robert Griesemer authored
Replaced use of rotate instructions (RCRQ, RCLQ) with ADDQ/SBBQ for restoring/saving the carry flag, per suggestion from Torbjörn Granlund (author of the GMP bignum libs for C). The rotate instructions tend to be slower on today's machines.

benchmark             old ns/op    new ns/op    delta
BenchmarkAddVV_1      5.69         5.51         -3.16%
BenchmarkAddVV_2      7.15         6.87         -3.92%
BenchmarkAddVV_3      8.69         8.06         -7.25%
BenchmarkAddVV_4      8.10         8.13         +0.37%
BenchmarkAddVV_5      8.37         8.47         +1.19%
BenchmarkAddVV_1e1    13.1         12.0         -8.40%
BenchmarkAddVV_1e2    78.1         69.4         -11.14%
BenchmarkAddVV_1e3    815          656          -19.51%
BenchmarkAddVV_1e4    8137         7345         -9.73%
BenchmarkAddVV_1e5    100127       93909        -6.21%
BenchmarkAddVW_1      4.86         4.71         -3.09%
BenchmarkAddVW_2      5.67         5.50         -3.00%
BenchmarkAddVW_3      6.51         6.34         -2.61%
BenchmarkAddVW_4      6.69         6.66         -0.45%
BenchmarkAddVW_5      7.20         7.21         +0.14%
BenchmarkAddVW_1e1    10.0         9.34         -6.60%
BenchmarkAddVW_1e2    45.4         52.3         +15.20%
BenchmarkAddVW_1e3    417          491          +17.75%
BenchmarkAddVW_1e4    4760         4852         +1.93%
BenchmarkAddVW_1e5    69107        67717        -2.01%

benchmark             old MB/s     new MB/s     speedup
BenchmarkAddVV_1      11241.82     11610.28     1.03x
BenchmarkAddVV_2      17902.68     18631.82     1.04x
BenchmarkAddVV_3      22082.43     23835.64     1.08x
BenchmarkAddVV_4      31588.18     31492.06     1.00x
BenchmarkAddVV_5      38229.90     37783.17     0.99x
BenchmarkAddVV_1e1    48891.67     53340.91     1.09x
BenchmarkAddVV_1e2    81940.61     92191.86     1.13x
BenchmarkAddVV_1e3    78443.09     97480.44     1.24x
BenchmarkAddVV_1e4    78644.18     87129.50     1.11x
BenchmarkAddVV_1e5    63918.48     68150.84     1.07x
BenchmarkAddVW_1      13165.09     13581.00     1.03x
BenchmarkAddVW_2      22588.04     23275.41     1.03x
BenchmarkAddVW_3      29483.82     30303.96     1.03x
BenchmarkAddVW_4      38286.54     38453.21     1.00x
BenchmarkAddVW_5      44414.57     44370.59     1.00x
BenchmarkAddVW_1e1    63816.84     68494.08     1.07x
BenchmarkAddVW_1e2    140885.41    122427.16    0.87x
BenchmarkAddVW_1e3    153258.31    130325.28    0.85x
BenchmarkAddVW_1e4    134447.63    131904.02    0.98x
BenchmarkAddVW_1e5    92609.41     94509.88     1.02x

Change-Id: Ia473e9ab9c63a955c252426684176bca566645ae
Reviewed-on: https://go-review.googlesource.com/2503
Reviewed-by: Keith Randall <khr@golang.org>
-
Martin Möhrmann authored
Edge cases like base 2 and 36 conversions are now covered. Many tests are mirrored from the itoa tests. Added more test cases for syntax errors.

Change-Id: Iad8b2fb4854f898c2bfa18cdeb0cb4a758fcfc2e
Reviewed-on: https://go-review.googlesource.com/2463
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
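As a quick illustration of the edge cases in question, using the standard strconv calls (expected results noted in comments):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        v, _ := strconv.ParseInt("1010", 2, 64) // base-2: 10
        w, _ := strconv.ParseInt("zz", 36, 64)  // base-36: 35*36 + 35 = 1295
        fmt.Println(v, w)

        _, err := strconv.ParseInt("12a", 10, 64) // syntax-error case
        fmt.Println(err) // strconv.ParseInt: parsing "12a": invalid syntax
    }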
-
Alex Brainman authored
I would like to create new syscalls in src/internal/syscall, and I prefer not to add new shell scripts for that. Replacement for CL 136000043.

Change-Id: I840116b5914a2324f516cdb8603c78973d28aeb4
Reviewed-on: https://go-review.googlesource.com/1940
Reviewed-by: Russ Cox <rsc@golang.org>
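For context, this points toward the go generate-based flow used by the Windows syscall code: a //sys annotation describes the call and a generator produces the boilerplate. The exact directive, path, and flags below are assumptions based on the tree's convention, not quoted from this CL:

    //go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zsyscall_windows.go syscall_windows.go

    //sys	GetComputerName(buf *uint16, n *uint32) (err error) = GetComputerNameW

Running "go generate" then regenerates zsyscall_windows.go with the trampoline for GetComputerNameW, with no shell scripts involved.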
-
Keith Randall authored
This test was taking a long time; reduce its zealousness.

Change-Id: Ib824247b84b0039a9ec690f72336bef3738d4c44
Reviewed-on: https://go-review.googlesource.com/2502
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
-
Brad Fitzpatrick authored
$GOTESTONLY controls which set of tests gets run. Only "std" is supported. This should bring the time of the plan9 builder down from 90 minutes to maybe 10-15 minutes when running on GCE. (Plan 9 has performance problems when running on GCE, and/or with the os/exec package.)

This is a temporary workaround for one builder. The other Plan 9 builders will continue to do full builds. The plan9 builder will be renamed plan9-386-gcepartial or something to indicate it's not running the 'test/*' directory or API tests. Go on Plan 9 has bigger problems for now. This lets us get trybots going sooner, including Plan 9, without waiting 90+ minutes.

Update #9491

Change-Id: Ic505e9169c6b304ed4029b7bdfb77bb5c8fa8daa
Reviewed-on: https://go-review.googlesource.com/2522
Reviewed-by: Rob Pike <r@golang.org>
-
Brad Fitzpatrick authored
This isn't the final answer, but it will give us a clue about what's going on.

Update #9491

Change-Id: I997f6004eb97e86a4a89a8caabaf58cfdf92a8f0
Reviewed-on: https://go-review.googlesource.com/2510
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Matthew Dempsky authored
Removing #cgo directive parsing from cmd/cgo was done in https://golang.org/cl/8610044.

Change-Id: Id1bec58c6ec1f932df0ce0ee84ff253655bb73ff
Reviewed-on: https://go-review.googlesource.com/2501
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
- 07 Jan, 2015 25 commits
-
-
Shenghou Ma authored
SWIG has always returned a typed interface value for a C++ class, so the interface value will never be nil even if the pointer itself is NULL. ptr == NULL in C/C++ should be ptr.Swigcptr() == 0 in Go.

Fixes #9514.

Change-Id: I3778b91acf54d2ff22d7427fbf2b6ec9b9ce3b43
Reviewed-on: https://go-review.googlesource.com/2440
Reviewed-by: Ian Lance Taylor <iant@golang.org>
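In Go terms the distinction looks like the sketch below. Foo and swigFoo are invented stand-ins for a SWIG-generated wrapper; real bindings are generated code.

    package main

    import "fmt"

    // Sketch of a SWIG-style wrapper interface.
    type Foo interface {
        Swigcptr() uintptr
    }

    type swigFoo struct{ ptr uintptr }

    func (f swigFoo) Swigcptr() uintptr { return f.ptr }

    func main() {
        var f Foo = swigFoo{ptr: 0} // wraps a NULL C++ pointer

        fmt.Println(f == nil)          // false: the interface value is typed, hence non-nil
        fmt.Println(f.Swigcptr() == 0) // true: the correct NULL test
    }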
-
David du Colombier authored
Increasing the timeout prevents the runtime test from timing out on the Plan 9 instances running on GCE.

Update golang/go#9491

Change-Id: Id9c2b0c4e59b103608565168655799b353afcd77
Reviewed-on: https://go-review.googlesource.com/2462
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Matthew Dempsky authored
Now that there's no 6c compiler anymore, there's no need for cgo to generate C headers that are compatible with it.

Fixes #9528

Change-Id: I43f53869719eb9a6065f1b39f66f060e604cbee0
Reviewed-on: https://go-review.googlesource.com/2482
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
Change-Id: I4dc97ff8111bdc5ca6e4e3af06aaf4f768031c68
Reviewed-on: https://go-review.googlesource.com/2473
Reviewed-by: Minux Ma <minux@golang.org>
-
Josh Bleecher Snyder authored
The compiler converts

    val, ok = m[key]

to

    tmp, ok = <runtime call>
    val = *tmp

For lookups of the form '_, ok = m[key]', the second statement is unnecessary. By not generating it we save a nil check.

Change-Id: I21346cc195cb3c62e041af8b18770c0940358695
Reviewed-on: https://go-review.googlesource.com/1975
Reviewed-by: Russ Cox <rsc@golang.org>
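Concretely, the optimization applies to code like this sketch (the runtime call in question is mapaccess2):

    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1}

        // Loads the value: compiles to tmp, ok := mapaccess2(...); v := *tmp.
        v, ok := m["a"]
        fmt.Println(v, ok)

        // Presence test only: with this change the compiler drops the
        // value load (and its nil check) because the result is discarded.
        _, ok = m["b"]
        fmt.Println(ok)
    }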
-
Josh Bleecher Snyder authored
* Enable basic constant propagation for floats. The constant propagation is still not as aggressive as it could be.
* Implement MOVSS $(0), Xx and MOVSD $(0), Xx as XORPS Xx, Xx.

Sample code:

    func f32() float32 {
        var f float32
        return f
    }

    func f64() float64 {
        var f float64
        return f
    }

Before:

    "".f32 t=1 size=32 value=0 args=0x8 locals=0x0
        0x0000 00000 (demo.go:3)  TEXT     "".f32+0(SB),4,$0-8
        0x0000 00000 (demo.go:3)  FUNCDATA $0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
        0x0000 00000 (demo.go:3)  FUNCDATA $1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
        0x0000 00000 (demo.go:3)  MOVSS    $f32.00000000+0(SB),X0
        0x0008 00008 (demo.go:4)  MOVSS    $f32.00000000+0(SB),X0
        0x0010 00016 (demo.go:5)  MOVSS    X0,"".~r0+8(FP)
        0x0016 00022 (demo.go:5)  RET

    "".f64 t=1 size=32 value=0 args=0x8 locals=0x0
        0x0000 00000 (demo.go:8)  TEXT     "".f64+0(SB),4,$0-8
        0x0000 00000 (demo.go:8)  FUNCDATA $0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
        0x0000 00000 (demo.go:8)  FUNCDATA $1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
        0x0000 00000 (demo.go:8)  MOVSD    $f64.0000000000000000+0(SB),X0
        0x0008 00008 (demo.go:9)  MOVSD    $f64.0000000000000000+0(SB),X0
        0x0010 00016 (demo.go:10) MOVSD    X0,"".~r0+8(FP)
        0x0016 00022 (demo.go:10) RET

After:

    "".f32 t=1 size=16 value=0 args=0x8 locals=0x0
        0x0000 00000 (demo.go:3)  TEXT     "".f32+0(SB),4,$0-8
        0x0000 00000 (demo.go:3)  FUNCDATA $0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
        0x0000 00000 (demo.go:3)  FUNCDATA $1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
        0x0000 00000 (demo.go:3)  XORPS    X0,X0
        0x0003 00003 (demo.go:5)  MOVSS    X0,"".~r0+8(FP)
        0x0009 00009 (demo.go:5)  RET

    "".f64 t=1 size=16 value=0 args=0x8 locals=0x0
        0x0000 00000 (demo.go:8)  TEXT     "".f64+0(SB),4,$0-8
        0x0000 00000 (demo.go:8)  FUNCDATA $0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
        0x0000 00000 (demo.go:8)  FUNCDATA $1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
        0x0000 00000 (demo.go:8)  XORPS    X0,X0
        0x0003 00003 (demo.go:10) MOVSD    X0,"".~r0+8(FP)
        0x0009 00009 (demo.go:10) RET

Change-Id: Ie9eb65e324af4f664153d0a7cd22bb16b0fba16d
Reviewed-on: https://go-review.googlesource.com/2053
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
The equal algorithm used to take the size:

    equal(p, q *T, size uintptr) bool

With this change, it does not:

    equal(p, q *T) bool

Similarly for the hash algorithm. The size is rarely used, as most equal functions know the size of the thing they are comparing. For instance f32equal already knows its inputs are 4 bytes in size. For cases where the size is not known, we allocate a closure (one for each size needed) that points to an assembly stub that reads the size out of the closure and calls generic code that has a size argument.

Reduces the size of the go binary by 0.07%. Performance impact is not measurable.

Change-Id: I6e00adf3dde7ad2974adbcff0ee91e86d2194fec
Reviewed-on: https://go-review.googlesource.com/2392
Reviewed-by: Russ Cox <rsc@golang.org>
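The closure-per-size idea can be sketched in Go (illustrative only: the names are invented, the real stub is assembly rather than a Go closure, and unsafe.Slice postdates this change):

    package main

    import (
        "bytes"
        "fmt"
        "unsafe"
    )

    // eqfn is the new size-less signature: the size travels in the
    // closure instead of in an argument.
    type eqfn func(p, q unsafe.Pointer) bool

    func makeMemEqual(size uintptr) eqfn {
        return func(p, q unsafe.Pointer) bool {
            a := unsafe.Slice((*byte)(p), size)
            b := unsafe.Slice((*byte)(q), size)
            return bytes.Equal(a, b) // generic code still sees a size, via the slices
        }
    }

    func main() {
        x, y := [8]byte{1, 2, 3}, [8]byte{1, 2, 3}
        eq := makeMemEqual(8) // one closure per needed size
        fmt.Println(eq(unsafe.Pointer(&x), unsafe.Pointer(&y))) // true
    }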
-
Josh Bleecher Snyder authored
It is unused as of e7173dfd.

Change-Id: I3e4ea3fc66cf0a768ff28172a151b244952eefc9
Reviewed-on: https://go-review.googlesource.com/2093
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Keith Randall authored
Use a lookup table to find the function which contains a pc. It is faster than the old binary search. findfunc is used primarily for stack copying and garbage collection.

benchmark            old ns/op    new ns/op    delta
BenchmarkStackCopy   294746596    255400980    -13.35%

(findfunc is one of several tasks done by stack copy, the findfunc time itself is about 2.5x faster.)

The lookup table is built at link time. The table grows the binary size by about 0.5% of the text segment. We impose a lower limit of 16 bytes on any function, which should not have much of an impact. (The real constraint required is <=256 functions in every 4096 bytes, but 16 bytes/function is easier to implement.)

Change-Id: Ic315b7a2c83e1f7203cd2a50e5d21a822e18fdca
Reviewed-on: https://go-review.googlesource.com/2097
Reviewed-by: Russ Cox <rsc@golang.org>
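The table-lookup idea, as a sketch (invented layout; the real table is emitted by the linker and is more compact):

    package main

    import "fmt"

    const bucketSize = 4096 // one index per 4 KB of text

    type funcInfo struct{ entry, end uintptr }

    // findfunc maps pc to its function: jump to the first function
    // overlapping pc's bucket, then scan forward. The 16-byte minimum
    // function size bounds how many functions a bucket can hold.
    func findfunc(pc, textStart uintptr, buckets []int, funcs []funcInfo) *funcInfo {
        b := (pc - textStart) / bucketSize
        if int(b) >= len(buckets) {
            return nil
        }
        for i := buckets[b]; i < len(funcs) && funcs[i].entry <= pc; i++ {
            if pc < funcs[i].end {
                return &funcs[i]
            }
        }
        return nil
    }

    func main() {
        funcs := []funcInfo{{0x1000, 0x1040}, {0x1040, 0x2200}, {0x2200, 0x2300}}
        buckets := []int{0, 1} // bucket 0 covers 0x1000-0x1fff, bucket 1 covers 0x2000-0x2fff
        f := findfunc(0x2210, 0x1000, buckets, funcs)
        fmt.Printf("%#x\n", f.entry) // 0x2200
    }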
-
Austin Clements authored
This implements support for calls to and from C in the ppc64 C ABI, as well as supporting functionality such as an entry point from the dynamic linker.

Change-Id: I68da6df50d5638cb1a3d3fef773fb412d7bf631a
Reviewed-on: https://go-review.googlesource.com/2009
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
Cgo will need this for calls from C to Go and for handling signals that may occur in C code.

Change-Id: I50cc4caf17cd142bff501e7180a1e27721463ada
Reviewed-on: https://go-review.googlesource.com/2008
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
R13 is the C TLS pointer. Once we're calling to and from C code, if we clobber R13 in our code, sigtramp won't know whether to get the current g from REGG or from C TLS. The simplest solution is for Go code to preserve the C TLS pointer. This is equivalent to what other platforms do, except that on other platforms the TLS pointer is in a special register.

Change-Id: I076e9cb83fd78843eb68cb07c748c4705c9a4c82
Reviewed-on: https://go-review.googlesource.com/2007
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
This implements the ELF relocations and dynamic linking tables necessary to support internal linking on ppc64. It also marks ppc64le ELF files as ABI v2; failing to do this doesn't seem to confuse the loader, but it does confuse libbfd (and hence gdb, objdump, etc).

Change-Id: I559dddf89b39052e1b6288a4dd5e72693b5355e4
Reviewed-on: https://go-review.googlesource.com/2006
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
Most ppc64 relocations come in six or more variants where the basic relocation formula is the same, but which bits of the computed value are installed where changes. Introduce the concept of "variants" for internal relocations to support this. Since this applies to architecture-independent relocation types like R_PCREL, we do this in relocsym. Currently there is only an identity variant. A later CL that adds support for ppc64 ELF relocations will introduce more.

Change-Id: I0c5f0e7dbe5beece79cd24fe36267d37c52f1a0c
Reviewed-on: https://go-review.googlesource.com/2005
Reviewed-by: Russ Cox <rsc@golang.org>
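To make "variant" concrete, here is an invented sketch of how the same computed value v might be installed differently per variant for ppc64-style half-word relocations (names and constants are illustrative, not the linker's):

    package main

    import "fmt"

    const (
        variantIdentity = iota // install v as-is
        variantLo16            // low 16 bits (@l)
        variantHi16            // bits 16..31 (@h)
        variantHa16            // high half, adjusted for the sign of the low half (@ha)
    )

    func applyVariant(variant int, v int64) int64 {
        switch variant {
        case variantLo16:
            return v & 0xffff
        case variantHi16:
            return (v >> 16) & 0xffff
        case variantHa16:
            return ((v + 0x8000) >> 16) & 0xffff
        default:
            return v
        }
    }

    func main() {
        v := int64(0x12349000)
        fmt.Printf("%#x %#x %#x\n",
            applyVariant(variantLo16, v), // 0x9000
            applyVariant(variantHi16, v), // 0x1234
            applyVariant(variantHa16, v)) // 0x1235: low half is "negative" as a 16-bit value
    }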
-
Austin Clements authored
ppc64 has a bunch of these.

Change-Id: I3b93ed2bae378322a8dec036b1681e520b56ff53
Reviewed-on: https://go-review.googlesource.com/2003
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Austin Clements authored
ppc64 function symbols have both a global entry point and a local entry point, where the difference is stashed in sym.other. We'll need this information to generate calls to ELF ABI functions.

Change-Id: Ibe343923f56801de7ebec29946c79690a9ffde57
Reviewed-on: https://go-review.googlesource.com/2002
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Keith Randall authored
Update #9401

Change-Id: I634a772814e7cd066f631a68342e7c3dc9d27e72
Reviewed-on: https://go-review.googlesource.com/2370
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
Cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks will be allocated directly. There is no point in caching 32KB+ stacks, as we ask for and return 32KB at a time from the allocator.

Note that the minimum stack is 8K on windows/64bit and 4K on windows/32bit and plan9. For these os/arch combinations, the number of stack orders is smaller so that we have the same maximum cached size.

Fixes #9045

Change-Id: Ia4195dd1858fb79fc0e6a91ae29c374d28839e44
Reviewed-on: https://go-review.googlesource.com/2098
Reviewed-by: Russ Cox <rsc@golang.org>
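The mapping from requested size to cache order can be sketched as follows (invented helper; the constants follow the text above):

    package main

    import "fmt"

    const (
        minCachedStack = 2 << 10 // smallest cached stack on most platforms
        numStackOrders = 4       // 2K, 4K, 8K, 16K
    )

    // stackOrder reports which per-P cache order serves a request of n
    // bytes, or cached=false if the stack is large enough (32 KB and up
    // here) to be allocated directly.
    func stackOrder(n int) (order int, cached bool) {
        s := minCachedStack
        for order = 0; order < numStackOrders; order++ {
            if n <= s {
                return order, true
            }
            s <<= 1
        }
        return 0, false
    }

    func main() {
        for _, n := range []int{2048, 4096, 16384, 32768} {
            o, ok := stackOrder(n)
            fmt.Println(n, o, ok) // 2048 0 true; 4096 1 true; 16384 3 true; 32768 0 false
        }
    }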
-
Oling Cat authored
Change-Id: I7238ae84d637534a345e5d077b8c63466148bd75
Reviewed-on: https://go-review.googlesource.com/1521
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
The ones at the end of M and G are just used to compute their size for use in assembly. Generate the size explicitly. The one at the end of itab is variable-sized, and at least one. The ones at the end of interfacetype and uncommontype are not needed, as the preceding slice references them (the slice was originally added for use by reflect?). The one at the end of stackmap is already accessed correctly, and the runtime never allocates one.

Update #9401

Change-Id: Ia75e3aaee38425f038c506868a17105bd64c712f
Reviewed-on: https://go-review.googlesource.com/2420
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
Fold in some startup randomness to make the hash vary across different runs. This helps prevent attackers from choosing keys that all map to the same bucket.

Also, reorganize the hash a bit. Move the *m1 multiply to after the xor of the current hash and the message. For hash quality it doesn't really matter, but for DDoS resistance it helps a lot (any processing done to the message before it is merged with the random seed is useless, as it is easily inverted by an attacker).

Update #9365

Change-Id: Ib19968168e1bbc541d1d28be2701bb83e53f1e24
Reviewed-on: https://go-review.googlesource.com/2344
Reviewed-by: Ian Lance Taylor <iant@golang.org>
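The ordering point can be pictured with a toy mixer (this is not the runtime's hash; the names and constants are invented):

    package main

    import "fmt"

    const m1 = 0x9ddfea08eb382d69 // illustrative odd multiplier

    var seed uint64 = 0x5eed // stands in for startup randomness from the OS

    // mix merges one message word into the running hash. The multiply
    // happens after the xor: any processing applied to the message
    // before it meets the (secret) seed would be trivially invertible
    // by an attacker, so it buys no flood resistance.
    func mix(h, w uint64) uint64 {
        h ^= w
        h *= m1
        return h
    }

    func hash(words []uint64) uint64 {
        h := seed // startup randomness folded in, so hashes vary across runs
        for _, w := range words {
            h = mix(h, w)
        }
        return h
    }

    func main() {
        fmt.Printf("%#x\n", hash([]uint64{1, 2, 3}))
    }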
-
Matthew Dempsky authored
The gc toolchain no longer includes a C compiler, so mentions of "6c" can be removed or replaced by 6g as appropriate. Similarly, some cgo functions that previously generated C source output no longer need to.

Change-Id: I1ae6b02630cff9eaadeae6f3176c0c7824e8fbe5
Reviewed-on: https://go-review.googlesource.com/2391
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Brad Fitzpatrick authored
Change-Id: I315b338968cb1d9298664d181de44a691b325bb8
Reviewed-on: https://go-review.googlesource.com/2450
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Brad Fitzpatrick authored
Reader.Discard is the complement to Peek. It discards the next n bytes of input.

We already have Reader.Buffered to see how many bytes of data are sitting available in memory, and Reader.Peek to get at that buffer directly. But once you're done with the Peek'd data, you can't get rid of it other than by reading it. Both Read and io.CopyN(ioutil.Discard, bufReader, N) are relatively slow. People instead resort to multiple blind ReadByte calls, just to advance the internal b.r variable.

I've wanted this previously, several people have asked for it in the past on golang-nuts/dev, and somebody just asked me for it again in a private email. There are a few places in the standard library we'd use it too.

Change-Id: I85dfad47704a58bd42f6867adbc9e4e1792bc3b0
Reviewed-on: https://go-review.googlesource.com/2260
Reviewed-by: Russ Cox <rsc@golang.org>
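Typical use pairs Peek with Discard, as in this small example against the API described above:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        br := bufio.NewReader(strings.NewReader("HDR:payload"))

        hdr, err := br.Peek(4) // inspect without consuming
        if err != nil {
            panic(err)
        }
        fmt.Printf("peeked %q\n", hdr)

        if _, err := br.Discard(4); err != nil { // done with it: drop it
            panic(err)
        }

        rest, _ := br.ReadString('\n') // "payload" (hits EOF, no '\n')
        fmt.Println(rest)
    }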
-
Shenghou Ma authored
This CL only fixes the build; there are two failing tests: RaceMapBigValAccess1 and RaceMapBigValAccess2 in the runtime/race tests. I haven't investigated why yet.

Updates #9516.

Change-Id: If5bd2f0bee1ee45b1977990ab71e2917aada505f
Reviewed-on: https://go-review.googlesource.com/2401
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
- 06 Jan, 2015 4 commits
-
-
Martin Möhrmann authored
Use direct binary insertion instead of recursive calls to symMerge when one of the blocks has only one element.

benchmark                  old ns/op     new ns/op     delta
BenchmarkStableString1K    421999        397629        -5.77%
BenchmarkStableInt1K       123422        120592        -2.29%
BenchmarkStableInt64K      9629094       9620200       -0.09%
BenchmarkStable1e2         123089        120209        -2.34%
BenchmarkStable1e4         39505228      36870029      -6.67%
BenchmarkStable1e6         8196612367    7630840157    -6.90%

Change-Id: I49905a909e8595cfa05920ccf9aa00a8f3036110
Reviewed-on: https://go-review.googlesource.com/2219
Reviewed-by: Robert Griesemer <gri@golang.org>
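The special case, sketched with the standard library's own search helper (illustrative; symMerge's real code is generic over sort.Interface and differs in details):

    package main

    import (
        "fmt"
        "sort"
    )

    // insertOne sketches the new special case: when the left block is a
    // single element data[a], merge it into the sorted block data[a+1:b]
    // by binary search plus a shift, instead of recursing. Searching for
    // the first element >= v keeps the sort stable.
    func insertOne(data []int, a, b int) {
        v := data[a]
        i := a + 1 + sort.SearchInts(data[a+1:b], v)
        copy(data[a:], data[a+1:i]) // shift the smaller elements left by one
        data[i-1] = v
    }

    func main() {
        d := []int{5, 1, 2, 4, 6, 8}
        insertOne(d, 0, len(d)) // merge the one-element block {5} into the rest
        fmt.Println(d)          // [1 2 4 5 6 8]
    }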
-
Russ Cox authored
sysReserve doesn't actually reserve the full amount requested on 64-bit systems, because of problems with ulimit. Instead it checks that it can get the first 64 kB and assumes it can grab the rest as needed. This doesn't work well with the "let the kernel pick an address" mode, so don't do that. Pick a high address instead.

Change-Id: I4de143a0e6fdeb467fa6ecf63dcd0c1c1618a31c
Reviewed-on: https://go-review.googlesource.com/2345
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
The line 'mp.schedlink = mnext' has an implicit write barrier call, which needs a valid g. Move it above the setg(nil).

Change-Id: If3e86c948e856e10032ad89f038bf569659300e0
Reviewed-on: https://go-review.googlesource.com/2347
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
This test is doing pointer graph manipulation from C, and we cannot support that with concurrent GC. The wbshadow mode correctly diagnoses missing write barriers. Disable the test in that mode for now. There is a bigger issue behind it, namely SWIG, but for now we are focused on making all.bash pass with wbshadow enabled.

Change-Id: I55891596d4c763e39b74082191d4a5fac7161642
Reviewed-on: https://go-review.googlesource.com/2346
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-