- 17 Oct, 2014 8 commits
-
Dmitriy Vyukov authored
Don't use cmd/pprof, as it is not necessarily installed and does not work on nacl and plan9. Instead, just look at the raw profile. LGTM=crawshaw, rsc R=golang-codereviews, crawshaw, 0intro, rsc CC=golang-codereviews https://golang.org/cl/159010043
-
Russ Cox authored
Better to avoid the memory loads and just use immediate constants. This especially applies to zeroing, which was being done by copying zeros from elsewhere in the binary, even if the value was going to be completely initialized with non-zero values. The zero writes were optimized away but the zero loads from the data segment were not. LGTM=r R=r, bradfitz, dvyukov CC=golang-codereviews https://golang.org/cl/152700045
-
Russ Cox authored
Replace i < 0 || i >= x with uint(i) >= uint(x). Shorten a few other code sequences. Move the kind bits to the bottom of the flag word, to avoid shifts. LGTM=r R=r, bradfitz CC=golang-codereviews https://golang.org/cl/159020043
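For readers unfamiliar with the trick, here is a minimal sketch of the rewrite in ordinary Go (not the reflect package's actual code): when x is a non-negative length, a single unsigned comparison subsumes both the negative check and the upper-bound check, because a negative i converts to a very large unsigned value.

```go
package main

import "fmt"

// outOfRangeTwoTests is the original form: two comparisons.
func outOfRangeTwoTests(i, x int) bool { return i < 0 || i >= x }

// outOfRangeOneTest is the rewritten form: one unsigned comparison.
// It is equivalent only because x (a length) is never negative.
func outOfRangeOneTest(i, x int) bool { return uint(i) >= uint(x) }

func main() {
	for _, i := range []int{-1, 0, 3, 4} {
		fmt.Println(i, outOfRangeTwoTests(i, 4), outOfRangeOneTest(i, 4))
	}
}
```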
-
Rob Pike authored
We borrow a trick from the fmt package and avoid reflection to walk the elements when possible. We could push further with unsafe (and we may) but this is a good start. Decode can benefit similarly; it will be done separately. Use go generate (engen.go) to produce the helper functions (enc_helpers.go).

benchmark                           old ns/op    new ns/op    delta
BenchmarkEndToEndPipe               6593         6482         -1.68%
BenchmarkEndToEndByteBuffer         3662         3684         +0.60%
BenchmarkEndToEndSliceByteBuffer    350306       351693       +0.40%
BenchmarkComplex128Slice            96347        80045        -16.92%
BenchmarkInt32Slice                 42484        26008        -38.78%
BenchmarkFloat64Slice               51143        36265        -29.09%
BenchmarkStringSlice                53402        35077        -34.32%

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/156310043
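The generated helpers are not shown here, but the shape of the optimization is roughly the following sketch (encodeInt32SliceFast is an illustrative name, not one of the actual enc_helpers.go functions): try a concrete-type fast path first, and fall back to the reflection-based walk only when the element type is not one of the common cases.

```go
package main

import (
	"fmt"
	"reflect"
)

// encodeInt32SliceFast handles one common case without per-element reflection.
// It reports false when the value is not a []int32, so the caller can fall
// back to the general reflection-based encoder.
func encodeInt32SliceFast(v reflect.Value, emit func(int64)) bool {
	s, ok := v.Interface().([]int32)
	if !ok {
		return false
	}
	for _, x := range s {
		emit(int64(x))
	}
	return true
}

func main() {
	v := reflect.ValueOf([]int32{1, 2, 3})
	encodeInt32SliceFast(v, func(x int64) { fmt.Println(x) })
}
```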
-
Russ Cox authored
gogo called from GC is okay for the same reasons that gogo called from System or ExternalCode is okay. All three are fake stack traces. Fixes #8408. LGTM=dvyukov, r R=r, dvyukov CC=golang-codereviews https://golang.org/cl/152580043
-
Russ Cox authored
This doesn't actually do anything. Maybe it will some day, but maybe not. TBR=r CC=golang-codereviews https://golang.org/cl/155490043
-
Brad Fitzpatrick authored
LGTM=iant R=golang-codereviews, iant CC=dvyukov, golang-codereviews, jamesr, nigeltao https://golang.org/cl/155530043
-
Russ Cox authored
Dmitriy believes this broke Windows. It looks like build.golang.org stopped before that, but it's worth a shot.

««« original CL description
runtime: make pprof a little nicer

Update #8942

This does not fully address issue 8942 but it does make the profiles much more useful, until that issue can be fixed completely.

LGTM=dvyukov
R=r, dvyukov
CC=golang-codereviews
https://golang.org/cl/159990043
»»»

TBR=dvyukov
CC=golang-codereviews
https://golang.org/cl/160030043
-
- 16 Oct, 2014 9 commits
-
Robert Griesemer authored
Fixes #8496. LGTM=rsc, r, iant R=r, rsc, iant, ken CC=golang-codereviews https://golang.org/cl/148580043
-
Damien Neil authored
LGTM=r R=r CC=golang-codereviews https://golang.org/cl/160920045
-
Damien Neil authored
LGTM=r R=r CC=golang-codereviews https://golang.org/cl/159920044
-
David du Colombier authored
Fixes #8849. LGTM=bradfitz, aram R=bradfitz, rsc, aram CC=golang-codereviews https://golang.org/cl/158970045
-
Russ Cox authored
It cannot run 'go tool pprof'. There is no guarantee that's installed. It needs to build a temporary pprof binary and run that. It also needs to skip the test on systems that can't build and run binaries, namely android and nacl. See src/cmd/nm/nm_test.go's TestNM for a template. Update #8867 Status: Accepted TBR=dvyukov CC=golang-codereviews https://golang.org/cl/153710043
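A rough sketch of the test pattern being asked for, loosely modeled on the TestNM template mentioned above (this is not the actual cmd/pprof test; paths and names are illustrative, and Go 1.4-era code would use ioutil.TempDir rather than os.MkdirTemp):

```go
package main_test

import (
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"testing"
)

func TestPProfBinary(t *testing.T) {
	// Skip on systems that cannot build and run binaries.
	switch runtime.GOOS {
	case "android", "nacl":
		t.Skipf("skipping on %s: cannot build and run binaries", runtime.GOOS)
	}

	tmp, err := os.MkdirTemp("", "pprof_test") // contemporaneous code: ioutil.TempDir
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(tmp)

	// Build a temporary pprof binary instead of assuming one is installed.
	exe := filepath.Join(tmp, "pprof.exe")
	out, err := exec.Command("go", "build", "-o", exe, "cmd/pprof").CombinedOutput()
	if err != nil {
		t.Fatalf("go build failed: %v\n%s", err, out)
	}

	// ... run exe against a test profile and check its output here ...
}
```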
-
Russ Cox authored
Update #8942 This does not fully address issue 8942 but it does make the profiles much more useful, until that issue can be fixed completely. LGTM=dvyukov R=r, dvyukov CC=golang-codereviews https://golang.org/cl/159990043
-
Dmitriy Vyukov authored
There are 3 issues:
1. The skip argument of callers is off by 3, so that all allocations appear to be attributed deep inside the memory profiler.
2. Memory profiling statistics are not updated after runtime.GC.
3. The testing package does not update memory profiling statistics before capturing the profile.
Also add an end-to-end test.
Fixes #8867.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/148710043
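As a concrete illustration of point 2, a program that captures its own heap profile should force a collection first so the statistics it is about to write are up to date; a minimal sketch (the output file name is arbitrary):

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("mem.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	runtime.GC() // flush recent allocation statistics before dumping them
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}
```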
-
Russ Cox authored
Both of these forms can avoid writing to the base pointer in x (in the slice, always, and in the append, most of the time). For Go 1.5, will need to change the compilation of x = x[0:y] to avoid writing to the base pointer, so that the elision is safe, and will need to change the compilation of x = append(x, ...) to write to the base pointer (through a barrier) only when growing the underlying array, so that the general elision is safe.

For Go 1.4, elide the write barrier always, a change that should have equivalent performance characteristics but is much simpler and therefore safer.

benchmark                        old ns/op     new ns/op     delta
BenchmarkBinaryTree17            3910526122    3918802545    +0.21%
BenchmarkFannkuch11              3747650699    3732600693    -0.40%
BenchmarkFmtFprintfEmpty         106           98.7          -6.89%
BenchmarkFmtFprintfString        280           269           -3.93%
BenchmarkFmtFprintfInt           296           282           -4.73%
BenchmarkFmtFprintfIntInt        467           470           +0.64%
BenchmarkFmtFprintfPrefixedInt   418           398           -4.78%
BenchmarkFmtFprintfFloat         574           535           -6.79%
BenchmarkFmtManyArgs             1768          1818          +2.83%
BenchmarkGobDecode               14916799      14925182      +0.06%
BenchmarkGobEncode               14110076      13358298      -5.33%
BenchmarkGzip                    546609795     542630402     -0.73%
BenchmarkGunzip                  136270657     136496277     +0.17%
BenchmarkHTTPClientServer        126574        125245        -1.05%
BenchmarkJSONEncode              30006238      27862354      -7.14%
BenchmarkJSONDecode              106020889     102664600     -3.17%
BenchmarkMandelbrot200           5793550       5818320       +0.43%
BenchmarkGoParse                 5437608       5463962       +0.48%
BenchmarkRegexpMatchEasy0_32     192           179           -6.77%
BenchmarkRegexpMatchEasy0_1K     462           460           -0.43%
BenchmarkRegexpMatchEasy1_32     168           153           -8.93%
BenchmarkRegexpMatchEasy1_1K     1420          1280          -9.86%
BenchmarkRegexpMatchMedium_32    338           286           -15.38%
BenchmarkRegexpMatchMedium_1K    107435        98027         -8.76%
BenchmarkRegexpMatchHard_32      5941          4846          -18.43%
BenchmarkRegexpMatchHard_1K      185965        153830        -17.28%
BenchmarkRevcomp                 795497458     798447829     +0.37%
BenchmarkTemplate                132091559     134938425     +2.16%
BenchmarkTimeParse               604           608           +0.66%
BenchmarkTimeFormat              551           548           -0.54%

LGTM=r
R=r, dave
CC=golang-codereviews, iant, khr, rlh
https://golang.org/cl/159960043
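For concreteness, these are the two assignment forms in question, written as ordinary Go; in both, the slice's base pointer usually stays the same, which is what makes eliding the write barrier attractive:

```go
package main

func truncate(x []byte, y int) []byte {
	x = x[0:y] // reslice: same backing array, only the length changes
	return x
}

func grow(x []byte, b byte) []byte {
	x = append(x, b) // base pointer changes only when the array must grow
	return x
}

func main() {
	buf := make([]byte, 4, 8)
	buf = truncate(buf, 2)
	buf = grow(buf, 1)
	_ = buf
}
```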
-
Adam Langley authored
A new attack on CBC padding in SSLv3 was released yesterday[1]. Go only supports SSLv3 as a server, not as a client. An easy fix is to change the default minimum version to TLS 1.0, but that seems a little much this late in the 1.4 process as it may break some things.

Thus this patch adds server support for TLS_FALLBACK_SCSV[2] -- a mechanism for solving the fallback problem overall. Chrome has implemented this since February and Google has urged others to do so in light of yesterday's news. With this change, clients can indicate that they are doing a fallback connection and Go servers will be able to correctly reject them.

[1] http://googleonlinesecurity.blogspot.com/2014/10/this-poodle-bites-exploiting-ssl-30.html
[2] https://tools.ietf.org/html/draft-ietf-tls-downgrade-scsv-00

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/157090043
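For comparison, the "easy fix" mentioned above is something an individual server operator can still apply through crypto/tls by raising the minimum protocol version; a minimal sketch (certificate file names are placeholders, and this is separate from the TLS_FALLBACK_SCSV handling the CL adds):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS10, // refuse SSLv3 connections outright
		},
	}
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```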
-
- 15 Oct, 2014 19 commits
-
Russ Cox authored
Among other things, *x = T{} does not need a write barrier. The changes here avoid an unnecessary copy even when no pointers are involved, so it may have larger effects.

In 6g and 8g, avoid manually repeated STOSQ in favor of writing explicit MOVs, under the theory that the MOVs should have fewer dependencies and pipeline better.

Benchmarks compare best of 5 on a 2012 MacBook Pro Core i5 with TurboBoost disabled. Most improvements can be explained by the changes in this CL. The effect in Revcomp is real but harder to explain: none of the instructions in the inner loop changed. I suspect loop alignment but really have no idea.

benchmark                        old           new           delta
BenchmarkBinaryTree17            3809027371    3819907076    +0.29%
BenchmarkFannkuch11              3607547556    3686983012    +2.20%
BenchmarkFmtFprintfEmpty         118           103           -12.71%
BenchmarkFmtFprintfString        289           277           -4.15%
BenchmarkFmtFprintfInt           304           290           -4.61%
BenchmarkFmtFprintfIntInt        507           458           -9.66%
BenchmarkFmtFprintfPrefixedInt   425           408           -4.00%
BenchmarkFmtFprintfFloat         555           555           +0.00%
BenchmarkFmtManyArgs             1835          1733          -5.56%
BenchmarkGobDecode               14738209      14639331      -0.67%
BenchmarkGobEncode               14239039      13703571      -3.76%
BenchmarkGzip                    538211054     538701315     +0.09%
BenchmarkGunzip                  135430877     134818459     -0.45%
BenchmarkHTTPClientServer        116488        116618        +0.11%
BenchmarkJSONEncode              28923406      29294334      +1.28%
BenchmarkJSONDecode              105779820     104289543     -1.41%
BenchmarkMandelbrot200           5791758       5771964       -0.34%
BenchmarkGoParse                 5376642       5310943       -1.22%
BenchmarkRegexpMatchEasy0_32     195           190           -2.56%
BenchmarkRegexpMatchEasy0_1K     477           455           -4.61%
BenchmarkRegexpMatchEasy1_32     170           165           -2.94%
BenchmarkRegexpMatchEasy1_1K     1410          1394          -1.13%
BenchmarkRegexpMatchMedium_32    336           329           -2.08%
BenchmarkRegexpMatchMedium_1K    108979        106328        -2.43%
BenchmarkRegexpMatchHard_32      5854          5821          -0.56%
BenchmarkRegexpMatchHard_1K      185089        182838        -1.22%
BenchmarkRevcomp                 834920364     780202624     -6.55%
BenchmarkTemplate                137046937     129728756     -5.34%
BenchmarkTimeParse               600           594           -1.00%
BenchmarkTimeFormat              559           539           -3.58%

LGTM=r
R=r
CC=golang-codereviews, iant, khr, rlh
https://golang.org/cl/157910047
-
Nigel Tao authored
LGTM=r R=r CC=golang-codereviews https://golang.org/cl/157080043
-
Chris Manghane authored
Fixes #8828. LGTM=rsc R=rsc CC=golang-codereviews https://golang.org/cl/154410043
-
Russ Cox authored
The general writebarrierfat needs a temporary for src, because we need to pass the address of the temporary to the writebarrierfat routine. But the new fixed-size ones pass the value directly and don't need to introduce the temporary. Magnifies some of the effect of the custom write barrier change.

Comparing best of 5 with TurboBoost turned off, on a 2012 Retina MacBook Pro Core i5. Still not completely confident in these numbers, but the fmt, regexp, and revcomp improvements seem real.

benchmark                        old ns/op     new ns/op     delta
BenchmarkBinaryTree17            3942965521    3929654940    -0.34%
BenchmarkFannkuch11              3707543350    3699566011    -0.22%
BenchmarkFmtFprintfEmpty         119           119           +0.00%
BenchmarkFmtFprintfString        295           296           +0.34%
BenchmarkFmtFprintfInt           313           314           +0.32%
BenchmarkFmtFprintfIntInt        517           484           -6.38%
BenchmarkFmtFprintfPrefixedInt   439           429           -2.28%
BenchmarkFmtFprintfFloat         571           569           -0.35%
BenchmarkFmtManyArgs             1899          1820          -4.16%
BenchmarkGobDecode               15507208      15325649      -1.17%
BenchmarkGobEncode               14811710      14715434      -0.65%
BenchmarkGzip                    561144467     549624323     -2.05%
BenchmarkGunzip                  137377667     137691087     +0.23%
BenchmarkHTTPClientServer        126632        124717        -1.51%
BenchmarkJSONEncode              29944112      29526629      -1.39%
BenchmarkJSONDecode              108954913     107339551     -1.48%
BenchmarkMandelbrot200           5828755       5821659       -0.12%
BenchmarkGoParse                 5577437       5521895       -1.00%
BenchmarkRegexpMatchEasy0_32     198           193           -2.53%
BenchmarkRegexpMatchEasy0_1K     486           469           -3.50%
BenchmarkRegexpMatchEasy1_32     175           167           -4.57%
BenchmarkRegexpMatchEasy1_1K     1450          1419          -2.14%
BenchmarkRegexpMatchMedium_32    344           338           -1.74%
BenchmarkRegexpMatchMedium_1K    112088        109855        -1.99%
BenchmarkRegexpMatchHard_32      6078          6003          -1.23%
BenchmarkRegexpMatchHard_1K      191166        187499        -1.92%
BenchmarkRevcomp                 854870445     799012851     -6.53%
BenchmarkTemplate                141572691     141508105     -0.05%
BenchmarkTimeParse               604           603           -0.17%
BenchmarkTimeFormat              579           560           -3.28%

LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/155450043
-
Russ Cox authored
scalar is no longer needed, now that interfaces always hold pointers.

Comparing best of 5 with TurboBoost turned off, on a 2012 Retina MacBook Pro Core i5. Still not completely confident in these numbers, but the gob and template improvements seem real.

benchmark                        old ns/op     new ns/op     delta
BenchmarkBinaryTree17            3819892491    3803008185    -0.44%
BenchmarkFannkuch11              3623876405    3611776426    -0.33%
BenchmarkFmtFprintfEmpty         119           118           -0.84%
BenchmarkFmtFprintfString        294           292           -0.68%
BenchmarkFmtFprintfInt           310           304           -1.94%
BenchmarkFmtFprintfIntInt        513           507           -1.17%
BenchmarkFmtFprintfPrefixedInt   427           426           -0.23%
BenchmarkFmtFprintfFloat         562           554           -1.42%
BenchmarkFmtManyArgs             1873          1832          -2.19%
BenchmarkGobDecode               15824504      14746565      -6.81%
BenchmarkGobEncode               14347378      14208743      -0.97%
BenchmarkGzip                    537229271     537973492     +0.14%
BenchmarkGunzip                  134996775     135406149     +0.30%
BenchmarkHTTPClientServer        119065        116937        -1.79%
BenchmarkJSONEncode              29134359      28928099      -0.71%
BenchmarkJSONDecode              106867289     105770161     -1.03%
BenchmarkMandelbrot200           5798475       5791433       -0.12%
BenchmarkGoParse                 5299169       5379201       +1.51%
BenchmarkRegexpMatchEasy0_32     195           195           +0.00%
BenchmarkRegexpMatchEasy0_1K     477           477           +0.00%
BenchmarkRegexpMatchEasy1_32     170           170           +0.00%
BenchmarkRegexpMatchEasy1_1K     1412          1397          -1.06%
BenchmarkRegexpMatchMedium_32    336           337           +0.30%
BenchmarkRegexpMatchMedium_1K    109025        108977        -0.04%
BenchmarkRegexpMatchHard_32      5854          5856          +0.03%
BenchmarkRegexpMatchHard_1K      184914        184748        -0.09%
BenchmarkRevcomp                 829233526     836598734     +0.89%
BenchmarkTemplate                142055312     137016166     -3.55%
BenchmarkTimeParse               598           597           -0.17%
BenchmarkTimeFormat              564           568           +0.71%

Fixes #7425.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews, iant, khr
https://golang.org/cl/158890043
-
Russ Cox authored
LGTM=r R=r CC=golang-codereviews https://golang.org/cl/152640043
-
Russ Cox authored
A Go prototype can be used instead now, and the compiler will do a better job than we will doing it by hand. (We got it wrong in amd64p32, causing the current build breakage.) The auto-prototype-matching only applies to functions without an explicit package path, so the TEXT lines for reflectcall and callXX are s/runtime·/·/. LGTM=khr R=khr CC=golang-codereviews, iant, r https://golang.org/cl/153600043
-
Russ Cox authored
Fixes #7969. LGTM=bradfitz R=bradfitz CC=golang-codereviews https://golang.org/cl/158950043
-
Russ Cox authored
Fixes #7990. LGTM=iant, bradfitz R=bradfitz, iant, robryk CC=golang-codereviews https://golang.org/cl/156220043
-
Chris Manghane authored
Fixes #6606. LGTM=rsc R=rsc CC=golang-codereviews, gri https://golang.org/cl/156210044
-
Brad Fitzpatrick authored
The http package by default adds "Accept-Encoding: gzip" to outgoing requests, unless it's a bad idea, or the user requested otherwise. Only when the http package adds its own implicit Accept-Encoding header does the http package also transparently un-gzip the response.

If the user requested part of a document (e.g. bytes 40 to 50), it appears that Github/Varnish send:

    range(gzip(content), 40, 50)

And not:

    gzip(range(content, 40, 50))

The RFC 2616 set of replacements (with the purpose of clarifying ambiguities since 1999) has an RFC about Range requests (http://tools.ietf.org/html/rfc7233) but does not mention the interaction with encodings.

Regardless of whether range(gzip(content)) or gzip(range(content)) is correct, this change prevents the Go package from asking for gzip in requests if we're also asking for Range, avoiding the issue. If the user cared, they can do it themselves. But Go transparently un-gzipping a fragment of gzip is never useful.

Fixes #8923
LGTM=adg
R=adg
CC=golang-codereviews
https://golang.org/cl/155420044
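The "do it themselves" escape hatch looks roughly like this: when the caller sets Accept-Encoding explicitly alongside Range, the http package neither adds its own header nor transparently un-gzips, leaving decoding to the caller (the URL is a placeholder):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com/big.file", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Range", "bytes=40-50")
	req.Header.Set("Accept-Encoding", "gzip") // explicit: caller handles decoding

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// resp.Body is returned as-is; inspect resp.Header.Get("Content-Encoding")
	// before deciding whether to decompress it yourself.
}
```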
-
Brad Fitzpatrick authored
Fixes #8534 LGTM=adg R=adg CC=golang-codereviews https://golang.org/cl/149340044
-
Ian Lance Taylor authored
Fixes #8936. LGTM=bradfitz R=agl, bradfitz CC=golang-codereviews https://golang.org/cl/152590043
-
Russ Cox authored
The assembler could give a better error, but this one is good enough for now. Fixes #8880. LGTM=r R=r CC=golang-codereviews https://golang.org/cl/153610043
-
Jens Frederich authored
When the Import function in go/build encounters a directory without any buildable Go source files, it returns a handy NoGoError. Now, if it instead encounters multiple Go source files from multiple packages, it returns a handy MultiplePackageError. A new test for NoGoError and MultiplePackageError is also provided. Fixes #8286. LGTM=adg, rsc R=bradfitz, rsc, adg CC=golang-codereviews https://golang.org/cl/155050043
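A hedged sketch of how a caller might distinguish the two cases when importing a directory; the error type names follow the description above and may differ from the released go/build API:

```go
package main

import (
	"fmt"
	"go/build"
)

func main() {
	// Import the package in ./some/dir (path is a placeholder).
	pkg, err := build.ImportDir("./some/dir", 0)
	switch err.(type) {
	case nil:
		fmt.Println("package:", pkg.Name)
	case *build.NoGoError:
		fmt.Println("directory has no buildable Go source files")
	case *build.MultiplePackageError: // type name as described in the CL
		fmt.Println("directory contains Go files from multiple packages")
	default:
		fmt.Println("import failed:", err)
	}
}
```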
-
Russ Cox authored
The racewalk code was not updated for the new write barriers. Make it more future-proof. The new write barrier code assumed that +1 pointer would be aligned properly for any type that might follow, but that's not true on 32-bit systems where some types are 64-bit aligned. The only system like that today is nacl/amd64p32. Insert a dummy pointer so that the ambiguously typed value is at +2 pointers, which is always max-aligned. LGTM=r R=r CC=golang-codereviews, iant, khr https://golang.org/cl/158890046
-
Rob Pike authored
FieldByIndex never returns an invalid Value, so the validity test can be avoided if the field is not indirect.

benchmark            old         new         delta
BenchmarkGobEncode   12768642    12424022    -2.70%
BenchmarkGobEncode   60.11       61.78       1.03x

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/158890045
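A small illustration of the reflect behavior being relied on, separate from the gob encoder itself: FieldByIndex on a struct value yields a usable Value for an existing field, so an IsValid check on the result is redundant in the non-indirect case.

```go
package main

import (
	"fmt"
	"reflect"
)

type inner struct{ N int }
type outer struct{ In inner }

func main() {
	v := reflect.ValueOf(outer{In: inner{N: 7}})
	f := v.FieldByIndex([]int{0, 0}) // outer.In.N, no pointers in the path
	fmt.Println(f.IsValid(), f.Int()) // true 7
}
```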
-
Chris Manghane authored
Fixes #7960. LGTM=rsc R=rsc CC=golang-codereviews, gri https://golang.org/cl/159800045
-
Alex Brainman authored
includes undo of 22318cd31d7d and also:
- always use SetUnhandledExceptionFilter on windows-386;
- crash when receive EXCEPTION_BREAKPOINT in exception handler.
Fixes #8006.
LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews
https://golang.org/cl/155360043
-
- 14 Oct, 2014 4 commits
-
Keith Randall authored
The inverse is defined whenever the element and the modulus are relatively prime. The code already handles this situation, but the spec does not. Test that it does indeed work. Fixes #8875 LGTM=agl R=agl CC=golang-codereviews https://golang.org/cl/155010043
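A small math/big example of the case now covered by the spec: an inverse exists whenever the element and the modulus are relatively prime, even when the modulus itself is not prime.

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	g := big.NewInt(3)
	n := big.NewInt(10) // gcd(3, 10) = 1, so 3 has an inverse mod 10

	inv := new(big.Int).ModInverse(g, n)
	fmt.Println(inv) // 7, since 3*7 = 21 ≡ 1 (mod 10)
}
```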
-
Russ Cox authored
Assignments of 2-, 3-, and 4-word values were handled by individual MOV instructions (and for scalars still are). But if there are pointers involved, those assignments now go through the write barrier routine. Before this CL, they went to writebarrierfat, which calls memmove. Memmove is too much overhead for these small amounts of data. Instead, call writebarrierfat{2,3,4}, which are specialized for the specific amount of data being copied.

Today the write barrier does not care which words are pointers, so size alone is enough to distinguish the cases. If we keep these distinctions in Go 1.5 we will need to expand them for all the pointer-vs-scalar possibilities, so the current 3 functions will become 3+7+15 = 25, still not a large burden (we deleted more morestack functions than that when we dropped segmented stacks).

BenchmarkBinaryTree17            3250972583    3123910344    -3.91%
BenchmarkFannkuch11              3067605223    2964737839    -3.35%
BenchmarkFmtFprintfEmpty         101           96.0          -4.95%
BenchmarkFmtFprintfString        267           235           -11.99%
BenchmarkFmtFprintfInt           261           253           -3.07%
BenchmarkFmtFprintfIntInt        444           402           -9.46%
BenchmarkFmtFprintfPrefixedInt   374           346           -7.49%
BenchmarkFmtFprintfFloat         472           449           -4.87%
BenchmarkFmtManyArgs             1537          1476          -3.97%
BenchmarkGobDecode               13986528      12432985      -11.11%
BenchmarkGobEncode               13120323      12537420      -4.44%
BenchmarkGzip                    451925758     437500578     -3.19%
BenchmarkGunzip                  113267612     110053644     -2.84%
BenchmarkHTTPClientServer        103151        77100         -25.26%
BenchmarkJSONEncode              25002733      23435278      -6.27%
BenchmarkJSONDecode              94213717      82568789      -12.36%
BenchmarkMandelbrot200           4804246       4713070       -1.90%
BenchmarkGoParse                 4646114       4379456       -5.74%
BenchmarkRegexpMatchEasy0_32     163           158           -3.07%
BenchmarkRegexpMatchEasy0_1K     433           391           -9.70%
BenchmarkRegexpMatchEasy1_32     154           138           -10.39%
BenchmarkRegexpMatchEasy1_1K     1481          1132          -23.57%
BenchmarkRegexpMatchMedium_32    282           270           -4.26%
BenchmarkRegexpMatchMedium_1K    92421         86149         -6.79%
BenchmarkRegexpMatchHard_32      5209          4718          -9.43%
BenchmarkRegexpMatchHard_1K      158141        147921        -6.46%
BenchmarkRevcomp                 699818791     642222464     -8.23%
BenchmarkTemplate                132402383     108269713     -18.23%
BenchmarkTimeParse               509           478           -6.09%
BenchmarkTimeFormat              462           456           -1.30%

LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/156200043
-
Russ Cox authored
Right now, go tool 6g -A fails complaining about 'any' type. TBR=r CC=golang-codereviews https://golang.org/cl/156200044
-
Keith Randall authored
Lowers gc pause time by 5-10% on test/bench/garbage LGTM=rsc, dvyukov R=rsc, dvyukov CC=golang-codereviews https://golang.org/cl/157810043
-