- 24 Jun, 2014 5 commits
-
-
Rob Pike authored
R=gri CC=golang-codereviews https://golang.org/cl/104340043
-
Dave Cheney authored
This CL re-applies the tests added in CL 101330053 and subsequently rolled back in CL 102610043. The original author of this change was Rui Ueyama <ruiu@google.com>. LGTM=r, ruiu R=ruiu, r CC=golang-codereviews https://golang.org/cl/109170043
-
Josh Bleecher Snyder authored
The number of estimated iterations required to reach the benchtime is multiplied by a safety margin (to avoid falling just short) and then rounded up to a readable number. With an accurate estimate, in the worst case, the resulting number of iterations could be 3.75x more than necessary: 1.5x for safety * 2.5x to round up (e.g. from 2eX+1 to 5eX). This CL reduces the safety margin to 1.2x. Experimentation showed a diminishing margin of return past 1.2x, although the average case continued to show improvements down to 1.05x. This CL also reduces the maximum round-up multiplier from 2.5x (from 2eX+1 to 5eX) to 2x, by allowing the number of iterations to be of the form 3eX. Both changes improve benchmark wall clock times, and the effects are cumulative.

From 1.5x to 1.2x safety margin:

package        old s  new s  delta
bytes            163    125   -23%
encoding/json     27     21   -22%
net/http          42     36   -14%
runtime          463    418   -10%
strings           82     65   -21%

Allowing 3eX iterations:

package        old s  new s  delta
bytes            163    134   -18%
encoding/json     27     23   -15%
net/http          42     36   -14%
runtime          463    422    -9%
strings           82     72   -12%

Combined:

package        old s  new s  delta
bytes            163    112   -31%
encoding/json     27     20   -26%
net/http          42     30   -29%
runtime          463    346   -25%
strings           82     60   -27%

LGTM=crawshaw, r, rsc R=golang-codereviews, crawshaw, r, rsc CC=golang-codereviews https://golang.org/cl/105990045
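For illustration, the rounding scheme described above, sketched in the style of the helpers in testing/benchmark.go (illustrative; the real file may differ in detail):

    // roundDown10 rounds a number down to the nearest power of 10.
    func roundDown10(n int) int {
        tens := 0
        for n >= 10 {
            n = n / 10
            tens++
        }
        result := 1
        for i := 0; i < tens; i++ {
            result *= 10
        }
        return result
    }

    // roundUp rounds n up to a number of the form 1eX, 2eX, 3eX, or 5eX.
    // Adding the 3eX form caps the worst-case round-up multiplier at 2x.
    func roundUp(n int) int {
        base := roundDown10(n)
        switch {
        case n <= base:
            return base
        case n <= 2*base:
            return 2 * base
        case n <= 3*base:
            return 3 * base
        case n <= 5*base:
            return 5 * base
        default:
            return 10 * base
        }
    }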
-
Robert Obryk authored
The previous call to parseRange already checks whether all the ranges start before the end of file. LGTM=robert.hencke, bradfitz R=golang-codereviews, robert.hencke, gobot, bradfitz CC=golang-codereviews https://golang.org/cl/91880044
-
Mikio Hara authored
Updates z-files from 10.7 kernel-based to 10.9 kernel-based. LGTM=iant R=golang-codereviews, bradfitz, iant CC=golang-codereviews https://golang.org/cl/102610045
-
- 23 Jun, 2014 7 commits
-
-
Dave Cheney authored
LGTM=iant R=ruiu, iant CC=golang-codereviews https://golang.org/cl/107320044
-
Dave Cheney authored
Update #1435 This proposal disables Setuid and Setgid on all linux platforms. Issue 1435 has been open for a long time and is unlikely to be addressed soon, so a commenter (https://code.google.com/p/go/issues/detail?id=1435#c45) argued that these functions should be made to fail rather than succeed in their broken state. LGTM=ruiu, iant R=iant, ruiu CC=golang-codereviews https://golang.org/cl/106170043
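A sketch of the shape of the change, inside package syscall (assuming the stubs return EOPNOTSUPP; treat the exact error as an assumption here):

    // On linux the raw setuid(2)/setgid(2) syscalls only affect the
    // calling thread, not the whole process, which is the brokenness
    // described in issue 1435. Rather than succeed misleadingly, the
    // wrappers fail outright.
    func Setuid(uid int) (err error) {
        return EOPNOTSUPP
    }

    func Setgid(gid int) (err error) {
        return EOPNOTSUPP
    }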
-
Mikio Hara authored
Update #8266 LGTM=iant R=golang-codereviews, iant CC=golang-codereviews https://golang.org/cl/104290043
-
Rui Ueyama authored
MOV with SSE registers seems faster than REP MOVSQ if the size being copied is less than about 2K. Previously we didn't use MOV if the memory region was larger than 256 bytes. This patch improves the performance of 257-2048 byte non-overlapping copies by using MOV. Here is the benchmark result on an Intel Xeon 3.5GHz (Nehalem).

benchmark             old ns/op  new ns/op  delta
BenchmarkMemmove16            4          4  +0.42%
BenchmarkMemmove32            5          5  -0.20%
BenchmarkMemmove64            6          6  -0.81%
BenchmarkMemmove128           7          7  -0.82%
BenchmarkMemmove256          10         10  +1.92%
BenchmarkMemmove512          29         16  -44.90%
BenchmarkMemmove1024         37         25  -31.55%
BenchmarkMemmove2048         55         44  -19.46%
BenchmarkMemmove4096         92         91  -0.76%

benchmark              old MB/s   new MB/s  speedup
BenchmarkMemmove16      3370.61    3356.88  1.00x
BenchmarkMemmove32      6368.68    6386.99  1.00x
BenchmarkMemmove64     10367.37   10462.62  1.01x
BenchmarkMemmove128    17551.16   17713.48  1.01x
BenchmarkMemmove256    24692.81   24142.99  0.98x
BenchmarkMemmove512    17428.70   31687.72  1.82x
BenchmarkMemmove1024   27401.82   40009.45  1.46x
BenchmarkMemmove2048   36884.86   45766.98  1.24x
BenchmarkMemmove4096   44295.91   44627.86  1.01x

LGTM=khr R=golang-codereviews, gobot, khr CC=golang-codereviews https://golang.org/cl/90500043
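A benchmark in the shape of the numbers above (a sketch; the real benchmarks live in the runtime package). copy() lowers to the runtime's memmove, so sizes in the 257-2048 byte range exercise the new SSE path:

    import "testing"

    func benchmarkMemmove(b *testing.B, n int) {
        src := make([]byte, n)
        dst := make([]byte, n)
        b.SetBytes(int64(n)) // report throughput in MB/s
        for i := 0; i < b.N; i++ {
            copy(dst, src)
        }
    }

    func BenchmarkMemmove512(b *testing.B) { benchmarkMemmove(b, 512) }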
-
Mikio Hara authored
Also exposes common socket option functions on Solaris. Update #7174 Update #7175 LGTM=aram R=golang-codereviews, aram CC=golang-codereviews https://golang.org/cl/107280044
-
Mikio Hara authored
LGTM=dave R=golang-codereviews, dave CC=golang-codereviews https://golang.org/cl/110020050
-
Rui Ueyama authored
paeth(0, x, 0) == x for any uint8 value. LGTM=nigeltao R=golang-codereviews, bradfitz, nigeltao CC=golang-codereviews https://golang.org/cl/105290049
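The identity is easy to see from the standard Paeth predictor (a sketch; image/png has an equivalent unexported helper). With a == 0 and c == 0 the distances reduce to pa = |b|, pb = 0, pc = |b|, so the predictor always returns b:

    func abs(x int) int {
        if x < 0 {
            return -x
        }
        return x
    }

    // paeth returns the PNG Paeth predictor of a, b, c
    // (the left, up, and upper-left neighbors).
    func paeth(a, b, c uint8) uint8 {
        // With p = a + b - c, these are |p-a|, |p-b|, |p-c|.
        pa := abs(int(b) - int(c))
        pb := abs(int(a) - int(c))
        pc := abs(int(a) + int(b) - 2*int(c))
        if pa <= pb && pa <= pc {
            return a
        } else if pb <= pc {
            return b
        }
        return c
    }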
-
- 22 Jun, 2014 3 commits
-
-
Rui Ueyama authored
sync.Pool is not supposed to be used everywhere, but is a last resort.

««« original CL description
strings: use sync.Pool to cache buffer

benchmark                         old ns/op  new ns/op  delta
BenchmarkByteReplacerWriteString       3596       3094  -13.96%

benchmark                         old allocs  new allocs  delta
BenchmarkByteReplacerWriteString           1           0  -100.00%

LGTM=dvyukov R=bradfitz, dave, dvyukov CC=golang-codereviews https://golang.org/cl/101330053
»»»

LGTM=dave R=r, dave CC=golang-codereviews https://golang.org/cl/102610043
-
Dave Cheney authored
Fixes #8074. The issue was not reproducible as of go version devel +e0ad7e329637 Thu Jun 19 22:19:56 2014 -0700 linux/arm, but the original test case is included in case the issue reopens itself. LGTM=dvyukov R=golang-codereviews, dvyukov CC=golang-codereviews https://golang.org/cl/107290043
-
Rui Ueyama authored
benchmark                         old ns/op  new ns/op  delta
BenchmarkByteReplacerWriteString       3596       3094  -13.96%

benchmark                         old allocs  new allocs  delta
BenchmarkByteReplacerWriteString           1           0  -100.00%

LGTM=dvyukov R=bradfitz, dave, dvyukov CC=golang-codereviews https://golang.org/cl/101330053
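The pattern, as a minimal sketch (hypothetical names, not the strings package's actual code; note this CL was later rolled back in CL 102610043, listed under 22 Jun above):

    import (
        "bytes"
        "io"
        "sync"
    )

    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    // writeString reuses a pooled buffer instead of allocating one per
    // call, which is where the -100% allocs figure above comes from.
    func writeString(w io.Writer, s string) (int, error) {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()
        buf.WriteString(s) // the real code applies byte replacements here
        n, err := w.Write(buf.Bytes())
        bufPool.Put(buf)
        return n, err
    }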
-
- 21 Jun, 2014 4 commits
-
-
Dmitriy Vyukov authored
R=golang-codereviews CC=golang-codereviews https://golang.org/cl/103520044
-
Dmitriy Vyukov authored
LGTM=ruiu R=golang-codereviews, ruiu CC=golang-codereviews https://golang.org/cl/109100046
-
Dmitriy Vyukov authored
LGTM=dave R=golang-codereviews, dave CC=golang-codereviews https://golang.org/cl/102580043
-
Dmitriy Vyukov authored
All tests pass except one test in the regexp package. LGTM=iant R=golang-codereviews, iant, dave CC=golang-codereviews https://golang.org/cl/107270043
-
- 20 Jun, 2014 7 commits
-
-
Dmitriy Vyukov authored
It was built on an old, bogus revision. LGTM=minux TBR=iant R=iant, minux CC=golang-codereviews https://golang.org/cl/101370052
-
Dmitriy Vyukov authored
This requires minimal changes to the runtime hooks. In particular, synchronization events must be done only on valid addresses now, so I've added the additional checks to race.c. LGTM=iant R=iant CC=golang-codereviews https://golang.org/cl/101000046
-
Rui Ueyama authored
benchmark                         old ns/op  new ns/op  delta
BenchmarkByteReplacerWriteString       7359       3661  -50.25%

LGTM=dave R=golang-codereviews, dave CC=golang-codereviews https://golang.org/cl/102550043
-
Dmitriy Vyukov authored
Fixes #7858. LGTM=ruiu R=ruiu CC=golang-codereviews https://golang.org/cl/92720045
-
Dmitriy Vyukov authored
Single-case select with a non-nil channel is pointless. LGTM=mikioh.mikioh R=mikioh.mikioh CC=golang-codereviews https://golang.org/cl/103920044
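For illustration, the kind of simplification meant (a sketch, not the CL's actual diff):

    // A single-case select on a channel known to be non-nil...
    func recvSelect(ch chan int) int {
        select {
        case v := <-ch:
            return v
        }
    }

    // ...behaves identically to a plain channel operation, so the
    // select wrapper is dead weight.
    func recvPlain(ch chan int) int {
        return <-ch
    }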
-
Dmitriy Vyukov authored
The Afterprologue check was required when we did not know about functions' return arguments and/or they were not zeroed. Now 100% precision is required for stacks due to stack copying, so it must work without the afterprologue check one way or another. I could limit this change for 1.3 to merely adding a TODO, but this check is so confusing that I don't want this knowledge to get lost. LGTM=rsc R=golang-codereviews, gobot, rsc, khr CC=golang-codereviews, khr, rsc https://golang.org/cl/96580045
-
Rui Ueyama authored
LGTM=dave R=golang-codereviews, bradfitz, dave CC=golang-codereviews https://golang.org/cl/109090048
-
- 19 Jun, 2014 8 commits
-
-
Rui Ueyama authored
Avoid unnecessary bitwise-OR operations.

benchmark                       old MB/s  new MB/s  speedup
BenchmarkEncodeToStringBase64     179.02    205.74  1.15x
BenchmarkEncodeToStringBase32     155.86    167.82  1.08x

LGTM=iant R=golang-codereviews, iant CC=golang-codereviews https://golang.org/cl/109090043
-
Rui Ueyama authored
Use WriteString instead of allocating a byte slice as a buffer. This was a TODO.

benchmark             old ns/op  new ns/op  delta
BenchmarkWriteString      40139      19991  -50.20%

LGTM=bradfitz R=golang-codereviews, bradfitz CC=golang-codereviews https://golang.org/cl/107190044
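The shape of the fix, sketched (this is essentially what io.WriteString does; illustrative only):

    import "io"

    // writeString tries the WriteString fast path before falling back
    // to allocating a []byte copy of s.
    func writeString(w io.Writer, s string) (int, error) {
        type stringWriter interface {
            WriteString(s string) (n int, err error)
        }
        if sw, ok := w.(stringWriter); ok {
            return sw.WriteString(s) // no per-call allocation
        }
        return w.Write([]byte(s)) // fallback: allocates a copy of s
    }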
-
Bill Thiede authored
Fixes #8201. LGTM=nigeltao R=nigeltao CC=golang-codereviews https://golang.org/cl/105990046
-
Caleb Spare authored
LGTM=bradfitz R=golang-codereviews, bradfitz CC=golang-codereviews https://golang.org/cl/102830043
-
Nigel Tao authored
Requires a decoder to do its own byte buffering instead of using bufio.Reader, due to byte stuffing.

benchmark                   old MB/s  new MB/s  speedup
BenchmarkDecodeBaseline        33.40     50.65  1.52x
BenchmarkDecodeProgressive     24.34     31.92  1.31x

On 6g, unsafe.Sizeof(huffman{}) falls from 4872 to 964 bytes, and the decoder struct contains 8 of those.

LGTM=r R=r, nightlyone CC=bradfitz, couchmoney, golang-codereviews, raph https://golang.org/cl/109050045
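Why byte stuffing forces the decoder to manage its own buffer, in a hypothetical sketch (image/jpeg's real decoder buffers in bulk and also handles restart markers, which this ignores): in JPEG entropy-coded data a 0xFF byte is followed by a stuffed 0x00 that must be consumed.

    import (
        "errors"
        "io"
    )

    type decoder struct {
        r   io.Reader
        tmp [1]byte
    }

    func (d *decoder) readByte() (byte, error) {
        _, err := io.ReadFull(d.r, d.tmp[:])
        return d.tmp[0], err
    }

    // readByteStuffedByte reads one byte of entropy-coded data,
    // consuming the 0x00 stuffed after a 0xFF data byte. This per-byte
    // logic is why a plain bufio.Reader.ReadByte isn't enough.
    func (d *decoder) readByteStuffedByte() (byte, error) {
        b, err := d.readByte()
        if err != nil || b != 0xFF {
            return b, err
        }
        next, err := d.readByte()
        if err != nil {
            return 0, err
        }
        if next != 0x00 {
            return 0, errors.New("jpeg: missing 0x00 after 0xff")
        }
        return 0xFF, nil
    }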
-
Andrew Gerrand authored
LGTM=minux R=golang-codereviews, minux CC=golang-codereviews https://golang.org/cl/107200043
-
Andrew Gerrand authored
This is a clone of 101370043, which I accidentally applied to the release branch first. No big deal, it needed to be applied there anyway. LGTM=r R=r CC=golang-codereviews https://golang.org/cl/108090043
-
ChaiShushan authored
Fixes #7694. LGTM=nigeltao, rsc, r R=golang-codereviews, nigeltao, rsc, r CC=golang-codereviews https://golang.org/cl/109000049
-
- 18 Jun, 2014 6 commits
-
-
Rui Ueyama authored
Storing temporary values to a slice is slower than storing them to local variables of type byte.

benchmark                       old MB/s  new MB/s  speedup
BenchmarkEncodeToStringBase32     102.21    156.66  1.53x
BenchmarkEncodeToStringBase64     124.25    177.91  1.43x

LGTM=crawshaw R=golang-codereviews, crawshaw, bradfitz, dave CC=golang-codereviews https://golang.org/cl/109820045
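A sketch of the idea for base32 (hypothetical encodeGroup; the real encoding/base32 loop also handles short groups and padding): the eight 5-bit symbols live in local byte variables, which the compiler can keep in registers, instead of bounds-checked stores into a temporary slice.

    // encodeGroup encodes 5 input bytes into 8 output symbols.
    // Requires len(src) >= 5, len(dst) >= 8, len(alphabet) == 32.
    func encodeGroup(dst, src []byte, alphabet string) {
        b0 := src[0] >> 3
        b1 := (src[0]<<2 | src[1]>>6) & 0x1F
        b2 := (src[1] >> 1) & 0x1F
        b3 := (src[1]<<4 | src[2]>>4) & 0x1F
        b4 := (src[2]<<1 | src[3]>>7) & 0x1F
        b5 := (src[3] >> 2) & 0x1F
        b6 := (src[3]<<3 | src[4]>>5) & 0x1F
        b7 := src[4] & 0x1F
        dst[0], dst[1], dst[2], dst[3] = alphabet[b0], alphabet[b1], alphabet[b2], alphabet[b3]
        dst[4], dst[5], dst[6], dst[7] = alphabet[b4], alphabet[b5], alphabet[b6], alphabet[b7]
    }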
-
Robert Dinu authored
Fixes #8175. LGTM=r R=golang-codereviews, r, gobot CC=golang-codereviews https://golang.org/cl/103320043
-
Rob Pike authored
Just to be more thorough. No need to push this to 1.3; it's just a test change that worked without any changes to the code being tested. LGTM=crawshaw R=golang-codereviews, crawshaw CC=golang-codereviews https://golang.org/cl/109080045
-
David Symonds authored
LGTM=bradfitz R=adg, rsc, bradfitz CC=golang-codereviews https://golang.org/cl/102470045
-
Rui Ueyama authored
genericReplacer.lookup is called for each byte of an input string. In many (most?) cases, lookup will fail for the first byte, and it will return immediately. Adding a fast path for that case seems worth it. Benchmark on my Xeon 3.5GHz Linux box:

benchmark                      old ns/op  new ns/op  delta
BenchmarkGenericNoMatch             2691        774  -71.24%
BenchmarkGenericMatch1              7920       8151   +2.92%
BenchmarkGenericMatch2             52336      39927  -23.71%
BenchmarkSingleMaxSkipping          1575       1575   +0.00%
BenchmarkSingleLongSuffixFail       1429       1429   +0.00%
BenchmarkSingleMatch               56228      55444   -1.39%
BenchmarkByteByteNoMatch             568        568   +0.00%
BenchmarkByteByteMatch               977        972   -0.51%
BenchmarkByteStringMatch            1669       1687   +1.08%
BenchmarkHTMLEscapeNew               422        422   +0.00%
BenchmarkHTMLEscapeOld               692        670   -3.18%
BenchmarkByteByteReplaces           8492       8474   -0.21%
BenchmarkByteByteMap                2817       2808   -0.32%

LGTM=rsc R=golang-codereviews, bradfitz, dave, rsc CC=golang-codereviews https://golang.org/cl/79200044
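A sketch of such a fast path (hypothetical names; the real genericReplacer walks a trie and is more involved): precompute which bytes can begin any pattern, and reject everything else before touching the trie.

    type replacer struct {
        canStart [256]bool // canStart[b]: does some pattern start with byte b?
    }

    func (r *replacer) lookup(s string) (matchLen int, ok bool) {
        if len(s) == 0 || !r.canStart[s[0]] {
            return 0, false // the common case: fail on the first byte
        }
        return r.trieLookup(s)
    }

    func (r *replacer) trieLookup(s string) (int, bool) {
        // stand-in for the existing (slower) trie walk
        return 0, false
    }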
-
Keith Randall authored
Make assembly govet-clean. Clean up fixes for CL 93380044. LGTM=rsc R=rsc CC=golang-codereviews https://golang.org/cl/107160047
-