- 12 Oct, 2015 1 commit
-
-
Brad Fitzpatrick authored
These were proposed in the RFC over three years ago, then proposed to be added to Go in https://codereview.appspot.com/7678043/ 2 years and 7 months ago, and the spec hasn't been updated or retracted the whole time. Time to export them. Of note, HTTP/2 uses code 431 (Request Header Fields Too Large).

Updates #12843
Change-Id: I78c2fed5fab9540a98e845ace73f21c430a48809
Reviewed-on: https://go-review.googlesource.com/15732
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
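A minimal sketch of how the newly exported RFC 6585 constants can be used in a handler, assuming the names net/http exports today (http.StatusTooManyRequests, http.StatusRequestHeaderFieldsTooLarge); the tooManyRequests helper is a hypothetical placeholder, not part of this change:

    package main

    import (
        "fmt"
        "net/http"
    )

    // tooManyRequests is a hypothetical rate-limit check standing in for
    // whatever policy an application actually enforces.
    func tooManyRequests(r *http.Request) bool { return false }

    func handler(w http.ResponseWriter, r *http.Request) {
        if tooManyRequests(r) {
            // StatusTooManyRequests (429) and StatusRequestHeaderFieldsTooLarge (431)
            // are among the RFC 6585 codes exported by this change.
            http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
            return
        }
        fmt.Fprintln(w, "ok")
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }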
-
- 11 Oct, 2015 2 commits
-
-
Didier Spezia authored
A MIME header can include values defined on several lines. Only the first line of each value was trimmed. Make sure all the lines are trimmed before being aggregated.

Fixes #11204
Change-Id: Id92f384044bc6c4ca836e5dba2081fe82c82dc85
Reviewed-on: https://go-review.googlesource.com/15683
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
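A hedged sketch of the idea behind the fix; joinFoldedValue is a hypothetical helper (net/textproto's real parser works on its own internal buffers), showing every folded continuation line being trimmed before the pieces are joined, not just the first:

    package main

    import (
        "fmt"
        "strings"
    )

    // joinFoldedValue trims each physical line of a folded header value and
    // then joins the pieces with a single space.
    func joinFoldedValue(lines []string) string {
        trimmed := make([]string, 0, len(lines))
        for _, l := range lines {
            trimmed = append(trimmed, strings.TrimSpace(l))
        }
        return strings.Join(trimmed, " ")
    }

    func main() {
        // A header value folded across three physical lines.
        folded := []string{"multipart/mixed;", "   boundary=frontier   ", "\tcharset=utf-8"}
        fmt.Printf("%q\n", joinFoldedValue(folded)) // "multipart/mixed; boundary=frontier charset=utf-8"
    }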
-
Nigel Tao authored
Fixes #12722
Change-Id: I6a630d8b072ef2b1c63de941743148f8c96b8e5f
Reviewed-on: https://go-review.googlesource.com/15671
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
- 10 Oct, 2015 4 commits
-
-
Brad Fitzpatrick authored
Fixes #12674
Change-Id: I82f53026dd2fc27bd7999d43c27932d683d92af6
Reviewed-on: https://go-review.googlesource.com/15730
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Dave Cheney authored
Change-Id: I34c8ada1f3c5d401944483df424011fa2ae9fc3d
Reviewed-on: https://go-review.googlesource.com/15673
Reviewed-by: Dave Cheney <dave@cheney.net>
-
Dave Cheney authored
net/http.Client returns some errors wrapped in a *url.Error. To avoid the requirement to unwrap these errors to determine if the cause was temporary or a timeout, make *url.Error implement net.Error directly.

Fixes #12866
Change-Id: I1ba84ecc7ad5147a40f056ff1254e60290152408
Reviewed-on: https://go-review.googlesource.com/15672
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
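A small sketch of what this buys callers; the URL and timeout are illustrative only. The timeout check goes through net.Error without first unwrapping the *url.Error:

    package main

    import (
        "fmt"
        "net"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 50 * time.Millisecond}
        resp, err := client.Get("https://example.com/") // illustrative URL
        if err != nil {
            // *url.Error now implements net.Error, so the timeout check no
            // longer requires unwrapping the error first.
            if ne, ok := err.(net.Error); ok && ne.Timeout() {
                fmt.Println("request timed out:", err)
                return
            }
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }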
-
Joe Tsai authored
The vast majority of the time, ReadError isn't even returned during IO operations. Instead, an unwrapped error is returned because of the ReadByte call on L705. Because DEFLATE streams are primarily compressed and require byte-for-byte Huffman decoding, most of the data read from a stream goes through ReadByte. Although this is technically an API change, any user relying on this error would not have worked properly anyway, since most IO errors are not wrapped. We might as well deprecate ReadError: it is useless and actually makes things harder for clients that do depend on catching IO errors.

Fixes #11856
Fixes #12724
Change-Id: Ib5fec5ae215e977c4e85de5701ce6a473d400af8
Reviewed-on: https://go-review.googlesource.com/14834
Reviewed-by: Nigel Tao <nigeltao@golang.org>
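A hedged illustration of the point: flakyReader is a purely illustrative reader that dies mid-stream, and the returned error is inspected directly instead of being type-asserted to the now-deprecated *flate.ReadError:

    package main

    import (
        "bytes"
        "compress/flate"
        "errors"
        "fmt"
        "io/ioutil"
    )

    // flakyReader hands out a short prefix of its data and then fails,
    // simulating an underlying reader that breaks partway through.
    type flakyReader struct {
        data []byte
        off  int
    }

    func (f *flakyReader) Read(p []byte) (int, error) {
        if f.off >= 4 {
            return 0, errors.New("simulated I/O failure")
        }
        n := copy(p, f.data[f.off:4])
        f.off += n
        return n, nil
    }

    func main() {
        var buf bytes.Buffer
        w, _ := flate.NewWriter(&buf, flate.DefaultCompression)
        w.Write([]byte("some data that compresses to more than four bytes"))
        w.Close()

        // As the commit notes, the underlying error usually comes back as-is,
        // so check err directly rather than asserting to *flate.ReadError.
        _, err := ioutil.ReadAll(flate.NewReader(&flakyReader{data: buf.Bytes()}))
        fmt.Println("read failed:", err)
    }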
-
- 09 Oct, 2015 16 commits
-
-
Ian Lance Taylor authored
The -W option has not worked since Go 1.3. It is not documented. When it did work, it generated useful output, but it was for human viewing; there was no reason to write a script that passes the -W option, so it's unlikely that anybody is using it today.

Change-Id: I4769f1ffd308a48324a866592eb7fd79a4cdee54
Reviewed-on: https://go-review.googlesource.com/15701
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Robert Griesemer authored
Remove another use of NodeList.

Change-Id: Ice07eff862caf715f722dec7829006bf71715b07
Reviewed-on: https://go-review.googlesource.com/15432
Reviewed-by: Dave Cheney <dave@cheney.net>
-
Nodir Turakulov authored
All warnings in cmd/go are printed using fmt.Fprintf(os.Stderr...) except one in test.go which is printed using log.Printf. This is a minor inconsistency.

Change-Id: Ib470d318810b44b86e6cfaa77e9a556a5ad94069
Reviewed-on: https://go-review.googlesource.com/15657
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Austin Clements authored
Currently, when the mutator allocates, the runtime first allocates the memory and then, if that G has done "enough" allocation, the runtime checks whether the G has assist debt to pay off and, if so, pays it off. This approach leads to under-assisting, where a G can allocate a large region (or many small regions) before paying for it, or can even exit with outstanding debt.

This commit flips this around so that a G always acquires enough credit for an allocation before it can perform that allocation. We continue to amortize the cost of assists by requiring that they over-assist when triggered to build up credit for many allocations.

Fixes #11967.
Change-Id: Idac9f11133b328535667674d837be72c23ebd899
Reviewed-on: https://go-review.googlesource.com/15409
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
-
Austin Clements authored
Currently we track the per-G GC assist balance as two monotonically increasing values: the bytes allocated by the G this cycle (gcalloc) and the scan work performed by the G this cycle (gcscanwork). The assist balance is hence assistRatio*gcalloc - gcscanwork. This works, but has two important downsides:

1) It requires floating-point math to figure out if a G is in debt or not. This makes it inappropriate to check for assist debt in the hot path of mallocgc, so we only do this when a G allocates a new span. As a result, Gs can operate "in the red", leading to under-assist and extended GC cycle length.

2) Revising the assist ratio during a GC cycle can lead to an "assist burst". If you think of plotting the scan work performed versus heap size, the assist ratio controls the slope of this line. However, in the current system, the target line always passes through 0 at the heap size that triggered GC, so if the runtime increases the assist ratio, there has to be a potentially large assist to jump from the current amount of scan work up to the new target scan work for the current heap size.

This commit replaces this approach with directly tracking the GC assist balance in terms of allocation credit bytes. Allocating N bytes simply decreases this by N and assisting raises it by the amount of scan work performed divided by the assist ratio (to get back to bytes). This will make it cheap to figure out if a G is in debt, which will let us efficiently check if an assist is necessary *before* performing an allocation and hence keep Gs "in the black". This also fixes assist bursts because the assist ratio is now in terms of *remaining* work, rather than work from the beginning of the GC cycle. Hence, the plot of scan work versus heap size becomes continuous: we can revise the slope, but this slope always starts from where we are right now, rather than where we were at the beginning of the cycle.

Change-Id: Ia821c5f07f8a433e8da7f195b52adfedd58bdf2c
Reviewed-on: https://go-review.googlesource.com/15408
Reviewed-by: Rick Hudson <rlh@golang.org>
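As a rough illustration of the credit-based accounting described in this and the previous commit, here is a much-simplified sketch; all type, field, and constant names are invented for illustration and do not correspond to the runtime's actual internals:

    package main

    import "fmt"

    // gcAssistState is a hypothetical, simplified model of the per-G balance.
    type gcAssistState struct {
        assistBytes int64   // allocation credit in bytes; negative means the G is in debt
        assistRatio float64 // scan work required per byte of allocation
    }

    // beforeAlloc charges size bytes against the credit and, if that would
    // leave the G in debt, performs enough assist scan work first.
    func (g *gcAssistState) beforeAlloc(size int64) {
        g.assistBytes -= size
        if g.assistBytes < 0 {
            // Over-assist a little so many small allocations amortize one assist.
            const overAssistBytes = 1 << 16
            need := -g.assistBytes + overAssistBytes
            // Paying for `need` bytes requires need*assistRatio units of scan work.
            g.doScanWork(int64(float64(need)*g.assistRatio) + 1)
        }
    }

    // doScanWork stands in for helping the garbage collector; finished scan
    // work is converted back into bytes of credit by dividing by the ratio.
    func (g *gcAssistState) doScanWork(work int64) {
        g.assistBytes += int64(float64(work) / g.assistRatio)
    }

    func main() {
        g := &gcAssistState{assistRatio: 2.0}
        g.beforeAlloc(4096) // credit is acquired before the allocation proceeds
        fmt.Println("remaining credit (bytes):", g.assistBytes)
    }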
-
Austin Clements authored
Currently we ensure a minimum heap distance of 1MB when computing the assist ratio. Rather than enforcing this minimum on the heap distance, it makes more sense to enforce that the heap goal itself is at least 1MB over the live heap size at the beginning of GC. Currently the two approaches are semantically equivalent, but this will let us switch to basing the assist ratio on current heap distance rather than the initial heap distance, since we can't enforce this minimum on the current heap distance (the GC may never finish because the goal posts will always be 1MB away).

Change-Id: I0027b1c26a41a0152b01e5b67bdb1140d43ee903
Reviewed-on: https://go-review.googlesource.com/15604
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently, gcController.scanWork is updated as lazily as possible since it is only read at the end of the GC cycle. We're about to read it during the GC cycle to improve the assist ratio revisions, so modify gcDrain* to regularly flush to gcController.scanWork in much the same way as we regularly flush to gcController.bgScanCredit. One consequence of this is that it's difficult to keep gcw.scanWork monotonic, so we give up on that and simply return the amount of scan work done by gcDrainN rather than calculating it in the caller.

Change-Id: I7b50acdc39602f843eed0b5c6d2dacd7e762b81d
Reviewed-on: https://go-review.googlesource.com/15407
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently callers of gcDrain control whether it flushes scan work credit to gcController.bgScanCredit by passing a value other than -1 for the flush threshold. Shortly we're going to make this always flush scan work to gcController.scanWork and optionally also flush scan work to gcController.bgScanCredit. This will be much easier if the flush threshold is simply a constant (which it is in practice) and callers merely control whether or not the flush includes the background credit. Hence, replace the flush threshold argument with a flag.

Change-Id: Ia27db17de8a3f1e462a5d7137d4b5dc72f99a04e
Reviewed-on: https://go-review.googlesource.com/15406
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
These functions were nearly identical. Consolidate them by adding a flags argument. In addition to cleaning up this code, this makes further changes that affect both functions easier.

Change-Id: I6ec5c947603bbbd3ff4040113b2fbc240e99745f
Reviewed-on: https://go-review.googlesource.com/15405
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Change-Id: I950af8d80433b3ae8a1da0aa7a8d2d0b295dd313
Reviewed-on: https://go-review.googlesource.com/15404
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Change-Id: I312e56e95d8ef8ae036d16444ab1e2df1285845d
Reviewed-on: https://go-review.googlesource.com/15403
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
The comment for assistRatio claimed it to be the reciprocal of what it actually is.

Change-Id: If7f9bb853d75d0097facff3aa6704b224d9108b8
Reviewed-on: https://go-review.googlesource.com/15402
Reviewed-by: Russ Cox <rsc@golang.org>
-
Nodir Turakulov authored
(*T)(unsafe.Pointer(&t)) === &t for t of type T

Change-Id: I43c1aa436747dfa0bf4cb0d615da1647633f9536
Reviewed-on: https://go-review.googlesource.com/15656
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
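A small standalone illustration of the simplification; the header type here is just a stand-in for whatever concrete type the cleaned-up code used:

    package main

    import (
        "fmt"
        "unsafe"
    )

    type header struct{ n int }

    func main() {
        var h header

        // The pattern removed by this change: converting a *header to
        // unsafe.Pointer and straight back to *header is a no-op...
        p1 := (*header)(unsafe.Pointer(&h))
        // ...so the plain address expression is equivalent and clearer.
        p2 := &h

        fmt.Println(p1 == p2) // true
    }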
-
Charlie Dorian authored
Fixes #12867
Change-Id: I8ba81c622bce2a77a6142f941603198582eaf8a4
Reviewed-on: https://go-review.googlesource.com/15570
Reviewed-by: Robert Griesemer <gri@golang.org>
-
mpl authored
The case fixed by this change happens when, in func (pr partReader) Read, the Peek happens to read so that peek looks like: "somedata\r\n--Boundary\r".

peekBufferSeparatorIndex was returning (-1, false) because it didn't find the trailing '\n'. This was wrong because:

1) It didn't match the documentation: as "\r\n--Boundary" was found, it should return the index of that pattern, not -1.

2) It led to an nCopy cut such as: "somedata\r| |\n--Boundary\r" instead of "somedata| |\r\n--Boundary\r", which made the subsequent Read miss the boundary and eventually end with a "return 0, io.ErrUnexpectedEOF" case, as reported in: https://github.com/camlistore/camlistore/issues/642

Change-Id: I1ba78a741bc0c7719e160add9cca932d10f8a615
Reviewed-on: https://go-review.googlesource.com/15269
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
-
Brian Gitonga Marete authored
Change-Id: I51edc77ad4621ad8f3908e69dcb7b4eab086b5fe
Reviewed-on: https://go-review.googlesource.com/15680
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
- 08 Oct, 2015 9 commits
-
-
Brad Fitzpatrick authored
Fixes #10342
Change-Id: I69c69352016a8fd0b62541128c2e86b242ebbe26
Reviewed-on: https://go-review.googlesource.com/15630
Reviewed-by: Minux Ma <minux@golang.org>
-
Dave Cheney authored
obj.ProgInfo is a field inside obj.Prog, which is currently 320 bytes on 64-bit platforms. By moving the Flags field below the other fields, the size of obj.Prog drops into the 288-byte size class, a saving of 32 bytes per value allocated on the heap.

Change-Id: If8bb12f45328996d7df1d0bac9d1c019d2af73bd
Reviewed-on: https://go-review.googlesource.com/15522
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
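A generic illustration of why field order changes struct size; these two structs hold the same fields in different orders and are not the actual obj.Prog/obj.ProgInfo layout:

    package main

    import (
        "fmt"
        "unsafe"
    )

    type padded struct {
        flags uint32 // followed by 4 bytes of padding on 64-bit platforms
        a     uint64
        b     uint64
        c     uint32 // trailing padding rounds the struct up again
    }

    type packed struct {
        a     uint64
        b     uint64
        flags uint32 // the two 4-byte fields now share one 8-byte slot
        c     uint32
    }

    func main() {
        fmt.Println(unsafe.Sizeof(padded{})) // 32 on 64-bit platforms
        fmt.Println(unsafe.Sizeof(packed{})) // 24
    }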
-
Matthew Dempsky authored
Change-Id: Ic19c94fe0af55e17f6c2fcfd36085ccb1584da6f
Reviewed-on: https://go-review.googlesource.com/15608
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Ian Lance Taylor authored
Go shared libraries do not support dlclose, and there is no likelihood that they will support dlclose in the future. Set the DF_1_NODELETE flag to tell the dynamic linker to not attempt to remove them from memory. This makes the shared library act as though every call to dlopen passed the RTLD_NODELETE flag.

Fixes #12582.
Update #11100.
Update #12873.
Change-Id: Id4b6e90a1b54e2e6fc8355b5fb22c5978fc762b4
Reviewed-on: https://go-review.googlesource.com/15605
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
-
Keith Randall authored
Improve the aeshash implementation to make it harder to engineer collisions.

1) Scramble the seed before xoring with the input string. This makes it harder to cancel known portions of the seed (like the size) because it mixes the per-table seed into those other parts.

2) Use table-dependent seeds for all stripes when hashing >16 byte strings.

For small strings this change uses 4 aesenc ops instead of 3, so it is somewhat slower. The first two can run in parallel, though, so it isn't 33% slower.

benchmark                          old ns/op  new ns/op  delta
BenchmarkHash64-12                 10.2       11.2       +9.80%
BenchmarkHash16-12                 5.71       6.13       +7.36%
BenchmarkHash5-12                  6.64       7.01       +5.57%
BenchmarkHashBytesSpeed-12         30.3       31.9       +5.28%
BenchmarkHash65536-12              2785       2882       +3.48%
BenchmarkHash1024-12               53.6       55.4       +3.36%
BenchmarkHashStringArraySpeed-12   54.9       56.5       +2.91%
BenchmarkHashStringSpeed-12        18.7       19.2       +2.67%
BenchmarkHashInt32Speed-12         14.8       15.1       +2.03%
BenchmarkHashInt64Speed-12         14.5       14.5       +0.00%

Change-Id: I59ea124b5cb92b1c7e8584008257347f9049996c
Reviewed-on: https://go-review.googlesource.com/14124
Reviewed-by: jcd . <jcd@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Michael Hudson-Doyle authored
Not only is this an obvious optimization:

benchmark                          old MB/s  new MB/s  speedup
BenchmarkMemmove1-4                35.35     29.65     0.84x
BenchmarkMemmove2-4                63.78     52.53     0.82x
BenchmarkMemmove3-4                89.72     73.96     0.82x
BenchmarkMemmove4-4                109.94    95.73     0.87x
BenchmarkMemmove5-4                127.60    112.80    0.88x
BenchmarkMemmove6-4                143.59    126.67    0.88x
BenchmarkMemmove7-4                157.90    138.92    0.88x
BenchmarkMemmove8-4                167.18    231.81    1.39x
BenchmarkMemmove9-4                175.23    252.07    1.44x
BenchmarkMemmove10-4               165.68    261.10    1.58x
BenchmarkMemmove11-4               174.43    263.31    1.51x
BenchmarkMemmove12-4               180.76    267.56    1.48x
BenchmarkMemmove13-4               189.06    284.93    1.51x
BenchmarkMemmove14-4               186.31    284.72    1.53x
BenchmarkMemmove15-4               195.75    281.62    1.44x
BenchmarkMemmove16-4               202.96    439.23    2.16x
BenchmarkMemmove32-4               264.77    775.77    2.93x
BenchmarkMemmove64-4               306.81    1209.64   3.94x
BenchmarkMemmove128-4              357.03    1515.41   4.24x
BenchmarkMemmove256-4              380.77    2066.01   5.43x
BenchmarkMemmove512-4              385.05    2556.45   6.64x
BenchmarkMemmove1024-4             381.23    2804.10   7.36x
BenchmarkMemmove2048-4             379.06    2814.83   7.43x
BenchmarkMemmove4096-4             387.43    3064.96   7.91x
BenchmarkMemmoveUnaligned1-4       28.91     25.40     0.88x
BenchmarkMemmoveUnaligned2-4       56.13     47.56     0.85x
BenchmarkMemmoveUnaligned3-4       74.32     69.31     0.93x
BenchmarkMemmoveUnaligned4-4       97.02     83.58     0.86x
BenchmarkMemmoveUnaligned5-4       110.17    103.62    0.94x
BenchmarkMemmoveUnaligned6-4       124.95    113.26    0.91x
BenchmarkMemmoveUnaligned7-4       142.37    130.82    0.92x
BenchmarkMemmoveUnaligned8-4       151.20    205.64    1.36x
BenchmarkMemmoveUnaligned9-4       166.97    215.42    1.29x
BenchmarkMemmoveUnaligned10-4      148.49    221.22    1.49x
BenchmarkMemmoveUnaligned11-4      159.47    239.57    1.50x
BenchmarkMemmoveUnaligned12-4      163.52    247.32    1.51x
BenchmarkMemmoveUnaligned13-4      167.55    256.54    1.53x
BenchmarkMemmoveUnaligned14-4      175.12    251.03    1.43x
BenchmarkMemmoveUnaligned15-4      192.10    267.13    1.39x
BenchmarkMemmoveUnaligned16-4      190.76    378.87    1.99x
BenchmarkMemmoveUnaligned32-4      259.02    562.98    2.17x
BenchmarkMemmoveUnaligned64-4      317.72    842.44    2.65x
BenchmarkMemmoveUnaligned128-4     355.43    1274.49   3.59x
BenchmarkMemmoveUnaligned256-4     378.17    1815.74   4.80x
BenchmarkMemmoveUnaligned512-4     362.15    2180.81   6.02x
BenchmarkMemmoveUnaligned1024-4    376.07    2453.58   6.52x
BenchmarkMemmoveUnaligned2048-4    381.66    2568.32   6.73x
BenchmarkMemmoveUnaligned4096-4    398.51    2669.36   6.70x
BenchmarkMemclr5-4                 113.83    107.93    0.95x
BenchmarkMemclr16-4                223.84    389.63    1.74x
BenchmarkMemclr64-4                421.99    1209.58   2.87x
BenchmarkMemclr256-4               525.94    2411.58   4.59x
BenchmarkMemclr4096-4              581.66    4372.20   7.52x
BenchmarkMemclr65536-4             565.84    4747.48   8.39x
BenchmarkGoMemclr5-4               194.63    160.31    0.82x
BenchmarkGoMemclr16-4              295.30    630.07    2.13x
BenchmarkGoMemclr64-4              480.24    1884.03   3.92x
BenchmarkGoMemclr256-4             540.23    2926.49   5.42x

but it turns out that it's necessary to avoid the GC seeing partially written pointers. It's of course possible to be more sophisticated (using ldp/stp to move 16 bytes at a time in the core loop and unrolling the tail copying loops being the obvious ideas) but I wanted something simple and (reasonably) obviously correct.

Fixes #12552
Change-Id: Iaeaf8a812cd06f4747ba2f792de1ded738890735
Reviewed-on: https://go-review.googlesource.com/14813
Reviewed-by: Austin Clements <austin@google.com>
-
Alexander Morozov authored
Change-Id: I8e94fa57482149f6ea8f13d02ddcc82d6764ddb8
Reviewed-on: https://go-review.googlesource.com/15496
Reviewed-by: Andrew Gerrand <adg@golang.org>
-
Didier Spezia authored
Following the C to Go translation, some useless variables were left in the code. In fmt.go, this was harmless. In lex.go, it broke the error message related to non-canonical import paths. Fix it, and remove the useless variables. The added test case is ignored in the go/types tests, since the behavior of the non-canonical import path check seems to be different.

Fixes #11362
Change-Id: Ic9129139ede90357dc79ebf167af638cf44536fa
Reviewed-on: https://go-review.googlesource.com/15580
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
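For reference, a small standalone illustration of what "non-canonical" means for an import path; looksCanonical only roughly mirrors the idea behind the compiler's check (clean the path and compare), not its exact rule:

    package main

    import (
        "fmt"
        "path"
        "strings"
    )

    // looksCanonical reports whether an import path is already in clean form
    // and carries no "./" or "../" elements. Illustrative sketch only.
    func looksCanonical(p string) bool {
        return p != "" && !strings.HasPrefix(p, "./") && !strings.HasPrefix(p, "../") && path.Clean(p) == p
    }

    func main() {
        for _, p := range []string{"net/http", "net//http", "strings/", "./strconv"} {
            fmt.Printf("%q canonical=%v\n", p, looksCanonical(p))
        }
    }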
-
Michael Hudson-Doyle authored
It's particularly nice to get rid of the android special cases in the linker.

Change-Id: I516363af7ce8a6b2f196fe49cb8887ac787a6dad
Reviewed-on: https://go-review.googlesource.com/14197
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
- 07 Oct, 2015 4 commits
-
-
Charlie Dorian authored
Copy math package CL 12230 to cmplx package.

Change-Id: I3345b782b84b5b98e2b6a60d8774c7e7cede2891
Reviewed-on: https://go-review.googlesource.com/15500
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Ian Lance Taylor authored
The self tests do not need to build the binary; they won't read it. The self tests should work on any ELF system. Use t.Skip instead of panic. Use internal/testenv. Don't worry about a space in the temporary directory name.

Change-Id: I66ef0af90520d330820afa7b6c6b3a132ab27454
Reviewed-on: https://go-review.googlesource.com/15495
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Ian Lance Taylor authored
Update two tests for the recently submitted gccgo change https://golang.org/cl/14259.

Change-Id: Ib18bc87ea512074aa91fd4096d0874b72e2243e5
Reviewed-on: https://go-review.googlesource.com/15493
Reviewed-by: Chris Manghane <cmang@golang.org>
-
Nodir Turakulov authored
The <importPath>/_test directory is not actually created in -n mode, so `go test` fails to write _testmain.go. Do not write _testmain.go if -n is passed.

Change-Id: I825d5040cacbc9d9a8c89443e5a3f83e6f210ce4
Reviewed-on: https://go-review.googlesource.com/15433
Reviewed-by: Andrew Gerrand <adg@golang.org>
-
- 06 Oct, 2015 4 commits
-
-
Shenghou Ma authored
Update #12834.
Change-Id: If7bbcc249517f2f2d8a7dcbba6411ede92331abe
Reviewed-on: https://go-review.googlesource.com/15381
Reviewed-by: Damian Gryski <dgryski@gmail.com>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Gordon Klaus authored
In particular, don't assume that one reflect.Value can be assigned to another just because they have the same reflect.Kind.

Fixes #12401
Change-Id: Ia4605a5c46557ff8f8f1d44f26d492850666c6d1
Reviewed-on: https://go-review.googlesource.com/15420
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
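A small illustration of the general point (same Kind does not imply assignability); Celsius and Fahrenheit are illustrative types, not anything gob uses internally:

    package main

    import (
        "fmt"
        "reflect"
    )

    // Celsius and Fahrenheit share the reflect.Kind Float64 but are distinct
    // named types, so a value of one is not assignable to the other.
    type Celsius float64
    type Fahrenheit float64

    func main() {
        c := reflect.TypeOf(Celsius(0))
        f := reflect.TypeOf(Fahrenheit(0))

        fmt.Println(c.Kind() == f.Kind()) // true: same Kind
        fmt.Println(c.AssignableTo(f))    // false: still not assignable
    }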
-
David du Colombier authored
In CL 14836, the implementation of duffcopy on amd64 was changed to replace the use of the MOVQ instructions with MOVUPS. However, it broke the build on plan9/amd64, since Plan 9 doesn't allow floating point in note handlers. This change disables the use of duffcopy on Plan 9.

Fixes #12829.
Change-Id: Ifd5b17b17977a1b631b16c3dfe2dc7ab4ad00507
Reviewed-on: https://go-review.googlesource.com/15421
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
-
Joe Tsai authored
Motivation:
* The logic to verify numEntries can overflow and incorrectly pass, allowing a malicious file to allocate arbitrary memory.
* The use of strconv.ParseInt does not set the integer precision to 64 bits, causing this code to work incorrectly on 32-bit machines.

Change-Id: I1b1571a750a84f2dde97cc329ed04fe2342aaa60
Reviewed-on: https://go-review.googlesource.com/15173
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
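A hedged sketch of the two fixes described above; parseEntryCount is a hypothetical helper, not archive/tar's actual code, and the record size and bound are illustrative:

    package main

    import (
        "fmt"
        "strconv"
    )

    // parseEntryCount parses with an explicit 64-bit size so the result does
    // not depend on the platform's int width, and rejects counts whose
    // derived byte size would overflow.
    func parseEntryCount(s string, recordSize int64) (int64, error) {
        if recordSize <= 0 {
            return 0, fmt.Errorf("invalid record size %d", recordSize)
        }
        n, err := strconv.ParseInt(s, 10, 64) // bitSize 64, not 0
        if err != nil {
            return 0, err
        }
        if n < 0 || n > (1<<62)/recordSize { // guard n*recordSize against overflow
            return 0, fmt.Errorf("entry count %d out of range", n)
        }
        return n, nil
    }

    func main() {
        fmt.Println(parseEntryCount("1024", 24))
        fmt.Println(parseEntryCount("9223372036854775807", 24)) // rejected
    }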
-